
Deploy locally saved machine learning models to a live REST API and integrated dashboard.

Project description

The mission of the AI Model Share Platform is to provide a trusted non-profit repository for machine learning model prediction APIs (Python library + integrated website at modelshare.org). A beta version of the platform is currently being used by Columbia University students, faculty, and staff to test and improve platform functionality.

In a matter of seconds, data scientists can launch a model into this infrastructure, and end-users around the world can interact with their machine learning models.

  • Launch machine learning models into scalable, production-ready prediction REST APIs using a single Python function.

  • Details about each model, how to use the model's API, and the model's author(s) are deployed simultaneously into a searchable website at modelshare.org.

  • Each deployed model receives an individual Model Playground page listing information about the model. Each page includes a fully functional prediction dashboard that allows end-users to input text, tabular, or image data and receive live predictions.

  • Moreover, users can build on model playgrounds by 1) creating ML model competitions, 2) uploading Jupyter notebooks to share code, 3) sharing model architectures, and 4) sharing data, with all shared artifacts automatically creating a data science user portfolio.

Use the aimodelshare Python library to deploy your model, create a new ML competition, and more.
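As a minimal sketch of the deployment flow (the ModelPlayground argument names follow the library's typical pattern but may differ across versions, so treat this as illustrative rather than authoritative):

from aimodelshare import ModelPlayground

# Illustrative sketch: deploy a locally saved model into a live REST API
# and dashboard. Assumes a saved ONNX model, a saved preprocessor, and
# training labels (y_train) already present in your session.
playground = ModelPlayground(
    input_type="tabular",        # or "text" / "image"
    task_type="classification",
    private=False,
)
playground.deploy(
    model_filepath="model.onnx",
    preprocessor_filepath="preprocessor.zip",
    y_train=y_train,             # labels used to build the prediction API's schema
)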

Find model playground web-dashboards to generate predictions now.

Installation

Install from PyPI

pip install aimodelshare

Install with Conda

Conda/Mamba install (macOS and Linux only; Windows users should use the pip method):

Make sure you have conda version >=4.9

You can check your conda version with:

conda --version

To update conda use:

conda update conda 

Installing aimodelshare from the conda-forge channel can be achieved by adding conda-forge to your channels with:

conda config --add channels conda-forge
conda config --set channel_priority strict

Once the conda-forge channel has been enabled, aimodelshare can be installed with conda:

conda install aimodelshare

or with mamba:

mamba install aimodelshare

Moral Compass: Dynamic Metric Support for AI Ethics Challenges

The Moral Compass system now supports tracking multiple performance metrics for fairness-focused AI challenges. Track accuracy, demographic parity, equal opportunity, and other fairness metrics simultaneously.

Quick Start with Multi-Metric Tracking

from aimodelshare.moral_compass import ChallengeManager

# Create a challenge manager
manager = ChallengeManager(
    table_id="fairness-challenge-2024",
    username="your_username"
)

# Track multiple metrics
manager.set_metric("accuracy", 0.85, primary=True)
manager.set_metric("demographic_parity", 0.92)
manager.set_metric("equal_opportunity", 0.88)

# Track progress
manager.set_progress(tasks_completed=3, total_tasks=5)

# Sync to leaderboard
result = manager.sync()
print(f"Moral compass score: {result['moralCompassScore']:.4f}")

Moral Compass Score Formula

moralCompassScore = primaryMetricValue × ((tasksCompleted + questionsCorrect) / (totalTasks + totalQuestions))

This combines:

  • Performance: Your primary metric value (e.g., fairness score)
  • Progress: Your completion rate across tasks and questions
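Plugging in the Quick Start values above (accuracy 0.85 as the primary metric, 3 of 5 tasks completed, no questions tracked) gives a quick sanity check of the formula:

# Worked example of the moral compass score formula (plain arithmetic)
primary_metric_value = 0.85                 # accuracy, marked primary above
tasks_completed, total_tasks = 3, 5
questions_correct, total_questions = 0, 0   # no questions in this example

progress = (tasks_completed + questions_correct) / (total_tasks + total_questions)
moral_compass_score = primary_metric_value * progress
print(round(moral_compass_score, 4))        # 0.85 * 0.6 = 0.51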

Features

  • Multiple Metrics: Track accuracy, fairness, robustness, and custom metrics
  • Primary Metric Selection: Choose which metric drives leaderboard ranking
  • Progress Tracking: Monitor task and question completion
  • Automatic Scoring: Server-side computation of moral compass scores
  • Leaderboard Sorting: Automatic ranking by moral compass score
  • Backward Compatible: Existing users without metrics continue to work

Example: Justice & Equity Challenge

See Justice & Equity Challenge Example for detailed examples including:

  • Multi-metric fairness tracking
  • Progressive challenge completion
  • Leaderboard queries
  • Custom fairness criteria

API Methods

ChallengeManager

from aimodelshare.moral_compass import ChallengeManager

manager = ChallengeManager(table_id="my-table", username="user1")

# Set metrics
manager.set_metric("accuracy", 0.90, primary=True)
manager.set_metric("fairness", 0.95)

# Set progress
manager.set_progress(tasks_completed=4, total_tasks=5)

# Preview score locally
score = manager.get_local_score()

# Sync to server
result = manager.sync()

API Client

from aimodelshare.moral_compass import MoralcompassApiClient

client = MoralcompassApiClient()

# Update moral compass with metrics
result = client.update_moral_compass(
    table_id="my-table",
    username="user1",
    metrics={"accuracy": 0.90, "fairness": 0.95},
    primary_metric="fairness",
    tasks_completed=4,
    total_tasks=5
)

Documentation

Moral Compass API URL Configuration

The Moral Compass API client requires a base URL to connect to the REST API. The URL is resolved as described below, depending on the environment.

For CI/CD Environments

In GitHub Actions workflows, the MORAL_COMPASS_API_BASE_URL environment variable is automatically exported from Terraform outputs:

- name: Initialize Terraform and get API URL
  working-directory: infra
  run: |
    terraform init
    terraform workspace select dev || terraform workspace new dev
    API_URL=$(terraform output -raw api_base_url)
    echo "MORAL_COMPASS_API_BASE_URL=$API_URL" >> $GITHUB_ENV

For Local Development

When developing locally, the API client attempts to resolve the URL in this order (a sketch of the logic follows the list):

  1. Environment variable - Set MORAL_COMPASS_API_BASE_URL or AIMODELSHARE_API_BASE_URL:

    export MORAL_COMPASS_API_BASE_URL="https://api.example.com/v1"
    
  2. Cached Terraform outputs - The client looks for infra/terraform_outputs.json

  3. Terraform command - As a fallback, executes terraform output -raw api_base_url in the infra/ directory
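The following sketch mirrors that three-step order. The helper name is illustrative (not the client's actual internals), and the cached file is assumed to use the terraform output -json layout:

import json
import os
import subprocess
from pathlib import Path

def resolve_api_base_url():
    # 1. Environment variables
    for var in ("MORAL_COMPASS_API_BASE_URL", "AIMODELSHARE_API_BASE_URL"):
        url = os.environ.get(var)
        if url:
            return url

    # 2. Cached Terraform outputs (assumes `terraform output -json` layout)
    outputs_file = Path("infra/terraform_outputs.json")
    if outputs_file.exists():
        data = json.loads(outputs_file.read_text())
        value = data.get("api_base_url", {}).get("value")
        if value:
            return value

    # 3. Fallback: ask Terraform directly
    try:
        return subprocess.check_output(
            ["terraform", "output", "-raw", "api_base_url"],
            cwd="infra", text=True,
        ).strip()
    except (OSError, subprocess.CalledProcessError):
        return None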

Graceful Test Skipping

Integration tests that require the Moral Compass API will skip gracefully if the URL cannot be resolved, rather than failing. This allows the test suite to run in environments where the infrastructure is not available (e.g., forks without access to AWS resources).
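In a pytest suite, such a guard might look like this sketch (fixture and test names are illustrative; resolve_api_base_url is the helper sketched above):

import pytest

@pytest.fixture
def api_base_url():
    # Skip, rather than fail, when the infrastructure is unavailable.
    url = resolve_api_base_url()
    if url is None:
        pytest.skip("Moral Compass API URL could not be resolved")
    return url

def test_moral_compass_roundtrip(api_base_url):
    # An integration test would exercise the live API here.
    assert api_base_url.startswith("http")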

Resource Cleanup

During testing, aimodelshare creates AWS resources including API Gateway REST APIs (playgrounds) and IAM users. To manage and clean up these resources:

Cleanup Script

Use the interactive cleanup script to identify and delete test resources:

# Preview resources without deleting (safe)
python scripts/cleanup_test_resources.py --dry-run

# Interactive cleanup
python scripts/cleanup_test_resources.py

# Cleanup in a specific region
python scripts/cleanup_test_resources.py --region us-west-2

The script will (a boto3 sketch of the listing step appears after this list):

  • List all API Gateway REST APIs (playgrounds) in the region
  • List IAM users created by the test framework (prefix: temporaryaccessAImodelshare)
  • Show associated resources (policies, access keys)
  • Allow you to select which resources to delete
  • Safely delete selected resources with proper cleanup order
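For orientation, here is a minimal boto3 sketch of the read-only listing step (illustrative only; the real script adds interactive selection and deletes resources in the proper order):

import boto3

REGION = "us-east-1"                        # assumed default region
IAM_PREFIX = "temporaryaccessAImodelshare"  # test-framework user prefix

# List API Gateway REST APIs (playgrounds) in the region
apigw = boto3.client("apigateway", region_name=REGION)
for page in apigw.get_paginator("get_rest_apis").paginate():
    for api in page.get("items", []):
        print("REST API:", api["id"], api.get("name", ""))

# List IAM users created by the test framework
iam = boto3.client("iam")
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        if user["UserName"].startswith(IAM_PREFIX):
            print("IAM user:", user["UserName"])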

GitHub Action

You can also trigger the cleanup workflow from the GitHub Actions tab:

  1. Go to Actions → Cleanup Test Resources
  2. Click Run workflow
  3. Select dry-run mode to preview resources
  4. Review the output and run locally to delete resources

For complete documentation, see CLEANUP_RESOURCES.md.


Download files

Download the file for your platform.

Source Distribution

aimodelshare-0.5.77.tar.gz (5.4 MB)


Built Distribution


aimodelshare-0.5.77-py3-none-any.whl (4.4 MB)


File details

Details for the file aimodelshare-0.5.77.tar.gz.

File metadata

  • Download URL: aimodelshare-0.5.77.tar.gz
  • Size: 5.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for aimodelshare-0.5.77.tar.gz:

  • SHA256: 9c2c5e34f923c9d8c574bcfb436fd9659ad77ec0826ada56eb959098b6aeaa60
  • MD5: 9dbd2d18fa72cdee016e333a14124ee1
  • BLAKE2b-256: e1eb8ace2b234429e5067955e92c753e5dcaa5495ae53cf59571df60aebcdf3f


Provenance

The following attestation bundles were made for aimodelshare-0.5.77.tar.gz:

Publisher: pypi-manual-publish.yml on mparrott-at-wiris/aimodelshare


File details

Details for the file aimodelshare-0.5.77-py3-none-any.whl.

File metadata

  • Download URL: aimodelshare-0.5.77-py3-none-any.whl
  • Size: 4.4 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for aimodelshare-0.5.77-py3-none-any.whl:

  • SHA256: 68e7ca6bc91c6d6bfa01b497aa66bb1a5fe51accb3b322bfd88527ebb6af1e93
  • MD5: c59654e316ec5d77c0effc4ce8fef342
  • BLAKE2b-256: 3b10214f4354d4228f0289d04b223ac3fbb35541bdfe129c58900d7c99aa4bd6


Provenance

The following attestation bundles were made for aimodelshare-0.5.77-py3-none-any.whl:

Publisher: pypi-manual-publish.yml on mparrott-at-wiris/aimodelshare

