
Heteroscedastic uncertainty estimates for fair algorithms.

Project description

FairlyUncertain Documentation

How to Add a New Dataset

To add a new dataset, follow these steps:

1. **Import Necessary Libraries**

Ensure you have the required libraries imported:

```python
import pickle
import os
import numpy as np
import pandas as pd
import requests
```
2. **Create a Dataset Loader Function**

Implement a function that loads your new dataset. Here is an example:

```python
def get_new_dataset():
    # Add logic to download, preprocess, and return your dataset
    pass
```
3. **Update the Data Loaders Dictionary**

Add the new dataset loader function to the dictionary of data loaders:

```python
dataloaders = {
    'New Dataset': get_new_dataset,
    # other datasets...
}
```
4. **Ensure Dataset Caching**

Make sure your dataset is cached for quick access:

```python
def cache_dataset(name):
    # Logic to cache the dataset
    pass

def read_dataset(name):
    # Logic to read the cached dataset
    pass
```
5. **Load Dataset Instance**

Ensure that the dataset can be loaded correctly:

```python
def load_instance(name, train_split=.8):
    # Logic to load the dataset instance
    pass
```
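The loader and caching steps above can be sketched end to end. Everything in this sketch is illustrative: the synthetic DataFrame stands in for a real download-and-preprocess step (which would typically use `requests` and `pandas`), and `CACHE_DIR` stands in for wherever the package actually stores its cache.

```python
# A minimal, self-contained sketch of the loader + caching pattern.
# The dataset here is synthetic; a real loader would download and
# preprocess data instead.
import os
import pickle
import tempfile

import numpy as np
import pandas as pd

CACHE_DIR = tempfile.gettempdir()  # stand-in for the package's cache folder

def get_new_dataset():
    # Stand-in for download + preprocessing: 100 rows, 2 features,
    # a binary group attribute, and a binary label.
    rng = np.random.default_rng(0)
    return pd.DataFrame({
        'x1': rng.normal(size=100),
        'x2': rng.normal(size=100),
        'group': rng.integers(0, 2, size=100),
        'y': rng.integers(0, 2, size=100),
    })

def cache_dataset(name, loader):
    # Serialize the loaded dataset so later calls can skip the download.
    path = os.path.join(CACHE_DIR, f'{name}.pkl')
    with open(path, 'wb') as f:
        pickle.dump(loader(), f)
    return path

def read_dataset(name):
    # Read the previously cached dataset back from disk.
    path = os.path.join(CACHE_DIR, f'{name}.pkl')
    with open(path, 'rb') as f:
        return pickle.load(f)

cache_dataset('New Dataset', get_new_dataset)
df = read_dataset('New Dataset')
print(df.shape)  # (100, 4)
```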

How to Write a New Experiment

To write a new experiment, follow these steps:

1. **Import Necessary Libraries**

Ensure all required libraries are imported:

```python
from tqdm import tqdm
import numpy as np
...
import fairlyuncertain as fu
```
2. **Define Algorithms to Test**

List the algorithms you want to test:

```python
algorithm_names = ['Algorithm1', 'Algorithm2']
algorithms = {name: fu.algorithms[name] for name in algorithm_names}
```
3. **Run Experiments**

Loop through datasets and algorithms to run your experiments:

```python
results = {}
for dataset in tqdm(fu.datasets):
    instance = fu.load_instance(dataset)
    results[dataset] = {'instance': instance}
    for algo_name in algorithms:
        results[dataset][algo_name] = algorithms[algo_name](instance)
```
4. **Evaluate and Plot Results**

Evaluate the results using the appropriate metrics and plot the outcomes:

```python
for metric_name in fu.metrics:
    fu.plot_results(results, algorithms, fu.datasets, metric_name=metric_name)
```
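The experiment loop above can be exercised without the package installed by substituting stand-ins for the registries. In this sketch, `datasets` and `algorithms` are toy replacements for `fu.datasets` and `fu.algorithms`, and `make_instance` replaces `fu.load_instance`; only the loop structure itself mirrors the real code.

```python
# A runnable sketch of the experiment loop, with stand-ins for
# fairlyuncertain's registries so it runs without the package.
import numpy as np

def make_instance(seed):
    # Toy replacement for fu.load_instance: random features and labels.
    rng = np.random.default_rng(seed)
    return {'X': rng.normal(size=(50, 3)), 'y': rng.integers(0, 2, size=50)}

# Stand-in registries (fu.datasets / fu.algorithms in the real package).
datasets = {'DatasetA': 0, 'DatasetB': 1}
algorithms = {
    'MeanBaseline': lambda inst: {'pred': np.full(len(inst['y']), inst['y'].mean())},
    'ZeroBaseline': lambda inst: {'pred': np.zeros(len(inst['y']))},
}

# Same loop shape as the experiment code above.
results = {}
for dataset, seed in datasets.items():
    instance = make_instance(seed)
    results[dataset] = {'instance': instance}
    for algo_name, algo in algorithms.items():
        results[dataset][algo_name] = algo(instance)

print(sorted(results['DatasetA']))  # ['MeanBaseline', 'ZeroBaseline', 'instance']
```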

How to Add a New Model/Algorithm

To add a new model or algorithm, follow these steps:

1. **Add Your Model Class if Necessary**

If your new algorithm requires a new model class, make those changes:

```python
class Model:
    def __init__(self, **kwargs):
        # Initialization logic
        pass

    def fit(self, X, y):
        # Fit logic
        pass

    def predict(self, X):
        # Prediction logic
        pass
```

2. **Import Necessary Libraries**

Ensure necessary libraries and modules are imported:

```python
import numpy as np
...
import fairlyuncertain as fu
```
3. **Define the New Algorithm**

Implement the logic for your new algorithm:

```python
def new_algorithm(instance, Model):
    # Algorithm implementation
    pass
```
4. **Add the Algorithm to the Dictionary**

Update the algorithms dictionary to include your new algorithm:

```python
algorithms = {
    'New Algorithm': new_algorithm,
    # other algorithms...
}
```
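A filled-in version of the `(instance, Model)` pattern might look like the sketch below. The instance format (`'X'`/`'y'` keys) and the heteroscedastic output shape (a point prediction plus a per-point standard deviation) are assumptions for illustration, and `ConstantModel` is a toy stand-in for a real model class.

```python
# A runnable sketch of an algorithm following the (instance, Model)
# convention above. A crude heteroscedastic baseline: fit a model for
# the mean, then estimate a single residual std applied to every point.
import numpy as np

class ConstantModel:
    """Toy stand-in for a real model class: predicts the training mean."""
    def __init__(self, **kwargs):
        self.mean_ = 0.0

    def fit(self, X, y):
        self.mean_ = float(np.mean(y))
        return self

    def predict(self, X):
        return np.full(len(X), self.mean_)

def new_algorithm(instance, Model):
    # Fit the mean model, then use the residual spread as a (constant)
    # per-point uncertainty estimate.
    X, y = instance['X'], instance['y']
    model = Model().fit(X, y)
    pred = model.predict(X)
    resid_std = float(np.std(y - pred))
    return {'pred': pred, 'std': np.full(len(X), resid_std)}

X = np.arange(10, dtype=float).reshape(-1, 1)
y = np.arange(10, dtype=float)
out = new_algorithm({'X': X, 'y': y}, ConstantModel)
print(out['pred'][0])  # 4.5
```

A real heteroscedastic algorithm would predict a different `std` per point; the constant residual std here just keeps the sketch short.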

How to Add a New Metric/Evaluation

To add a new metric or evaluation, follow these steps:

1. **Import Necessary Libraries**

Import required libraries for your metric:

```python
import numpy as np
...
```
2. **Define the New Metric**

Implement the logic for the new metric:

```python
def new_metric(pred, y, group):
    # Metric calculation logic
    pass
```
3. **Add the Metric to the Dictionary**

Update the metrics dictionary to include your new metric:

```python
metrics = {
    'New Metric': new_metric,
    # other metrics...
}
```
4. **Use the New Metric in Evaluation**

Ensure the new metric is used in the evaluation process:

```python
def evaluate_metrics(results, metrics):
    for metric_name in metrics:
        for dataset in results:
            results[dataset][metric_name] = metrics[metric_name](
                results[dataset]['pred'],
                results[dataset]['y'],
                results[dataset]['group']
            )
```
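A concrete metric with the `(pred, y, group)` signature might look like the sketch below. The specific metric, the gap in mean absolute error between groups, is an illustrative choice, not one taken from the package.

```python
# A runnable sketch of a fairness metric with the (pred, y, group)
# signature used above: the absolute gap in mean absolute error
# between the best- and worst-served groups.
import numpy as np

def mae_gap(pred, y, group):
    pred, y, group = np.asarray(pred), np.asarray(y), np.asarray(group)
    # Per-group mean absolute error, then the spread across groups.
    maes = [np.abs(pred[group == g] - y[group == g]).mean()
            for g in np.unique(group)]
    return float(max(maes) - min(maes))

pred  = np.array([1.0, 2.0, 3.0, 4.0])
y     = np.array([1.0, 1.0, 3.0, 2.0])
group = np.array([0, 0, 1, 1])
print(mae_gap(pred, y, group))  # 0.5  (group 0 MAE 0.5, group 1 MAE 1.0)
```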

Project details


Download files

Download the file for your platform.

Source Distribution

fairlyuncertain-0.0.3.tar.gz (22.1 kB)

Uploaded Source

Built Distribution


fairlyuncertain-0.0.3-py3-none-any.whl (27.2 kB)

Uploaded Python 3

File details

Details for the file fairlyuncertain-0.0.3.tar.gz.

File metadata

  • Download URL: fairlyuncertain-0.0.3.tar.gz
  • Upload date:
  • Size: 22.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.3

File hashes

Hashes for fairlyuncertain-0.0.3.tar.gz:

  • SHA256: 92eb9bd034ed39aaa5ff6cfec867dc778304cb45684fadb6c986c0d6c22035ea
  • MD5: 9f12626c46ec5e3aee1e91dc7644edb8
  • BLAKE2b-256: ada9393b62b68edc180954187f833b7a2643544f482e7518787f1da801346cae


File details

Details for the file fairlyuncertain-0.0.3-py3-none-any.whl.

File metadata

File hashes

Hashes for fairlyuncertain-0.0.3-py3-none-any.whl:

  • SHA256: a47600a5deb5fe78cce1deff769d8b83fab560754e02db9617aa5e165e1d1e3c
  • MD5: f9a81037dc1846231402836283ab418a
  • BLAKE2b-256: e77b45379189973cf2b4371a7f9cf328657b858fc947ec1776eb4ba383434fbf

