
Python SDK to interact with the GenGuardX Platform


GenGuardX Python SDK


The GenGuardX Python SDK provides a Python interface to the GenGuardX Platform, an enterprise-grade AI governance, model management, and monitoring solution from Corridor Platforms.

Installation

Using pip

pip install genguardx

Quick Start

1. Initialize Connection

Connect to your GenGuardX platform instance:

import genguardx as ggx

# Initialize with your API key
ggx.init(api_key='your-api-key-here')

For custom deployments, specify the API URL:

ggx.init(
    api_key='your-api-key-here',
    api_url='your-genguardx-instance-url'
)

2. Check Your Connection

# Verify who you're logged in as
ggx.whoami()
# Output: Logged in as 'John Doe' to workspace 'corridor'. Any changes made in this session will be tracked under the user 'john.doe'.

3. Work with AI Pipelines

# Access a registered pipeline
chatbot = ggx.Pipeline('customer_support_bot')

# View pipeline details
print(f"Pipeline: {chatbot.name} (v{chatbot.version})")
print(f"Status: {chatbot.current_status}")
print(f"Description: {chatbot.description}")

# Simulate the pipeline with test inputs
results = chatbot(
    user_message="What's my account balance?",
    context={"customer_id": "12345"}
)
print(results)

4. List Available Components

# List all available pipelines
all_pipelines = ggx.Pipeline.all()
for pipeline in all_pipelines:
    print(f"- {pipeline.name} (v{pipeline.version})")

# Filter pipelines by group
ml_pipelines = ggx.Pipeline.all(group='Machine Learning')

# Search across all pipelines
qa_pipelines = ggx.Pipeline.all(contains='question')

Core Concepts

DataTable

Represents registered datasets that can be queried and analyzed:

# Access by alias or name
table = ggx.DataTable(alias='customer_data')

# Get column information
for column in table.columns:
    print(f"{column.alias}: {column.type}")

# Access as PySpark DataFrame
df = table.to_spark()

# Get data types dictionary
dtypes = table.dtypes  # {'customer_id': 'str', 'signup_date': 'datetime', ...}
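
Because `table.dtypes` returns a plain dict (as the comment above shows), it can drive lightweight client-side checks before data is sent anywhere. A minimal sketch under that assumption; the `missing_columns` helper below is hypothetical and not part of the SDK:

```python
# Hypothetical helper: validate a record against a dtypes mapping like the
# one returned by table.dtypes. NOT part of the GenGuardX SDK.
def missing_columns(record, dtypes):
    """Return the expected columns that are absent from `record`, sorted."""
    return sorted(set(dtypes) - set(record))

dtypes = {"customer_id": "str", "signup_date": "datetime"}
record = {"customer_id": "12345"}
print(missing_columns(record, dtypes))  # -> ['signup_date']
```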

Model (Foundation Models)

Manage and version AI models:

# Load a model
gpt_model = ggx.Model('gpt4-turbo')

# Check model properties
print(f"Provider: {gpt_model.provider}")
print(f"Model Type: {gpt_model.type}")
print(f"Version: {gpt_model.version}")

# Simulate model execution
result = gpt_model(
    prompt="Explain quantum computing",
    temperature=0.7
)
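
Model simulations go over the network, so callers often wrap them with retries. A minimal sketch of such a wrapper, using a stand-in function in place of a real `ggx.Model`; the retry policy here is an assumption, not documented SDK behaviour:

```python
import time

def call_with_retries(model, attempts=3, backoff=0.5, **kwargs):
    """Call `model(**kwargs)`, retrying on failure with exponential backoff."""
    for attempt in range(attempts):
        try:
            return model(**kwargs)
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(backoff * 2 ** attempt)

# Stand-in for a ggx.Model that fails once, then succeeds.
calls = {"n": 0}
def flaky_model(**kwargs):
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("transient failure")
    return {"text": "ok"}

print(call_with_retries(flaky_model, backoff=0.01,
                        prompt="Explain quantum computing"))
```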

Prompt

Manage versioned prompt templates:

# Access a prompt template
prompt = ggx.Prompt('classification_prompt')

# View template and arguments
print(f"Template: {prompt.template}")
print(f"Arguments: {prompt.arguments}")

# Get prompt metadata
print(f"Task Type: {prompt.current_status}")
print(f"Group: {prompt.group}")

RAG (Retrieval-Augmented Generation)

Access RAG systems for context-aware AI:

# Load a RAG configuration
knowledge_rag = ggx.Rag('product_knowledge_base')

# Check RAG details
print(f"Type: {knowledge_rag.type}")
print(f"Description: {knowledge_rag.description}")

# Simulate RAG retrieval
results = knowledge_rag(
    query="What are the product specifications?",
    top_k=5
)
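
The `top_k` parameter caps how many passages the retriever returns. A self-contained toy illustration of top-k retrieval using word-overlap scoring; this is purely illustrative and says nothing about how the platform's retriever actually ranks documents:

```python
def top_k_by_overlap(query, passages, k):
    """Rank passages by shared-word count with the query; return the top k."""
    q = set(query.lower().split())
    scored = sorted(passages,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

docs = [
    "Product specifications and dimensions",
    "Shipping and returns policy",
    "Specifications for the enterprise product tier",
]
print(top_k_by_overlap("product specifications", docs, k=2))
```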

Pipeline

Orchestrate complex AI workflows:

# Access pipeline
pipeline = ggx.Pipeline('sentiment_analyzer')

# View pipeline configuration
print(f"Type: {pipeline.pipeline_type}")
print(f"Inputs: {pipeline.input_models}")
print(f"Prompts: {pipeline.input_prompts}")

# Check permissible purposes
print(f"Allowed for: {pipeline.permissible_purpose}")

# For chat-based pipelines, access chat sessions
if pipeline.pipeline_type == "Chat based - OpenAI Spec":
    sessions = pipeline.chat_sessions
    for session in sessions[:5]:
        print(f"Session: {session.name}")

Searching and Filtering

All main components support powerful search and filtering:

# Search by name
models = ggx.Model.all(name='gpt')

# Search by contains
pipelines = ggx.Pipeline.all(contains='customer')

# Filter by group
qa_checks = ggx.QualityCheck.all(group='Data Quality')

# Filter by status
approved_models = ggx.Model.all(status='approved')

# Combine filters
results = ggx.Pipeline.all(
    group='Production',
    status='approved',
    contains='chatbot'
)
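
Combined filters most plausibly narrow results with AND semantics (an assumption; the examples above do not state how filters compose). A plain-Python sketch of that interpretation over stand-in records, not SDK code:

```python
def filter_all(items, group=None, status=None, contains=None):
    """AND-combine group/status/contains filters over simple dict records."""
    out = items
    if group is not None:
        out = [i for i in out if i["group"] == group]
    if status is not None:
        out = [i for i in out if i["status"] == status]
    if contains is not None:
        out = [i for i in out if contains.lower() in i["name"].lower()]
    return out

pipelines = [
    {"name": "support_chatbot", "group": "Production", "status": "approved"},
    {"name": "support_chatbot_dev", "group": "Sandbox", "status": "draft"},
    {"name": "fraud_scorer", "group": "Production", "status": "approved"},
]
print(filter_all(pipelines, group="Production",
                 status="approved", contains="chatbot"))
```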

Documentation

Support

For support, please contact:


Project details


Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distribution


genguardx-2025.12.24-py3-none-any.whl (84.6 kB)


File details

Details for the file genguardx-2025.12.24-py3-none-any.whl.

File metadata

  • Download URL: genguardx-2025.12.24-py3-none-any.whl
  • Upload date:
  • Size: 84.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.9.18 {"installer":{"name":"uv","version":"0.9.18","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"22.04","id":"jammy","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null}

File hashes

Hashes for genguardx-2025.12.24-py3-none-any.whl:

  • SHA256: 69c43344883cb3289385499ad5d6f4d6e10afd05327ef087e5b3d2e51ca2e03a
  • MD5: c49fa1bce9037d58e29636e1689613c9
  • BLAKE2b-256: 54e63a06878d204a8d77bda09d036c4faef26e88985205d949f98c910c2c9fdd

