
Heteroscedastic uncertainty estimates for fair algorithms.

Project description

FairlyUncertain Documentation


How to Add a New Dataset

To add a new dataset, follow these steps:

1. **Import Necessary Libraries**

Ensure you have the required libraries imported:

```python
import pickle
import os
import numpy as np
import pandas as pd
import requests
```
2. **Create Dataset Loader Function**

Implement a function to load your new dataset. Here is an example:

```python
def get_new_dataset():
    # Add logic to download, preprocess and return your dataset
    pass
```
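As a concrete illustration of what such a loader might look like, here is a minimal sketch. The return format (features `X`, labels `y`, and a protected-group array `group`) and the column names are illustrative assumptions, not the package's actual API; a real loader would download and preprocess the raw data instead of generating a synthetic frame.

```python
import numpy as np
import pandas as pd

def get_new_dataset():
    # Placeholder: a real loader would download (e.g. via requests)
    # and preprocess the raw data. Here we build a small synthetic
    # DataFrame so the shape of the function is visible end to end.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        'feature_1': rng.normal(size=100),
        'feature_2': rng.normal(size=100),
        'group': rng.integers(0, 2, size=100),   # protected attribute
        'label': rng.integers(0, 2, size=100),   # prediction target
    })
    X = df.drop(columns=['label']).to_numpy()
    y = df['label'].to_numpy()
    group = df['group'].to_numpy()
    return X, y, group
```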
3. **Update Data Loaders Dictionary**

Add the new dataset loader function to the dictionary of data loaders:

```python
dataloaders = {
    'New Dataset': get_new_dataset,
    # other datasets...
}
```
4. **Ensure Dataset Caching**

Make sure your dataset is cached for quick access:

```python
def cache_dataset(name):
    # Logic to cache the dataset
    pass

def read_dataset(name):
    # Logic to read the cached dataset
    pass
```
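A pickle-based sketch of the two caching helpers, assuming a local `cache/` directory. Note that `cache_dataset` here takes an extra `data` argument so the pair is self-contained; the library's actual signatures may differ.

```python
import os
import pickle

CACHE_DIR = 'cache'  # assumed cache location

def cache_dataset(name, data):
    # Serialize the dataset to cache/<name>.pkl for quick reloads.
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(os.path.join(CACHE_DIR, f'{name}.pkl'), 'wb') as f:
        pickle.dump(data, f)

def read_dataset(name):
    # Return the cached dataset, or None if it has not been cached yet.
    path = os.path.join(CACHE_DIR, f'{name}.pkl')
    if not os.path.exists(path):
        return None
    with open(path, 'rb') as f:
        return pickle.load(f)
```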
5. **Load Dataset Instance**

Ensure that the dataset can be loaded correctly:

```python
def load_instance(name, train_split=0.8):
    # Logic to load dataset instance
    pass
```
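To make the split logic concrete, here is a sketch using synthetic data in place of the cached dataset. The instance dictionary keys (`X_train`, `y_train`, `group_train`, and their test counterparts) are assumptions for illustration, not the package's documented schema.

```python
import numpy as np

def load_instance(name, train_split=0.8):
    # Stand-in data; a real implementation would read the cached dataset.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))
    y = rng.integers(0, 2, size=100)
    g = rng.integers(0, 2, size=100)
    # Shuffle, then split the first train_split fraction off for training.
    n_train = int(train_split * len(y))
    perm = rng.permutation(len(y))
    train, test = perm[:n_train], perm[n_train:]
    return {
        'name': name,
        'X_train': X[train], 'y_train': y[train], 'group_train': g[train],
        'X_test': X[test], 'y_test': y[test], 'group_test': g[test],
    }
```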

How to Write a New Experiment

To write a new experiment, follow these steps:

1. **Import Necessary Libraries**

Ensure all required libraries are imported:

```python
from tqdm import tqdm
import numpy as np
...
import fairlyuncertain as fu
```
2. **Define Algorithms to Test**

List the algorithms you want to test:

```python
algorithm_names = ['Algorithm1', 'Algorithm2']
algorithms = {name: fu.algorithms[name] for name in algorithm_names}
```
3. **Run Experiments**

Loop through datasets and algorithms to run your experiments:

```python
results = {}
for dataset in tqdm(fu.datasets):
    instance = fu.load_instance(dataset)
    results[dataset] = {'instance': instance}
    for algo_name in algorithms:
        results[dataset][algo_name] = algorithms[algo_name](instance)
```
4. **Evaluate and Plot Results**

Evaluate the results using the appropriate metrics and plot the outcomes:

```python
for metric_name in fu.metrics:
    fu.plot_results(results, algorithms, fu.datasets, metric_name=metric_name)
```
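The result-collection pattern above can be run end to end without the library by substituting stubs. Everything below (`datasets`, `load_instance`, the two toy algorithms) is a hypothetical stand-in for `fu.datasets`, `fu.load_instance`, and `fu.algorithms`, shown only so the nested-dictionary shape of `results` is concrete.

```python
import numpy as np

# Hypothetical stand-ins for the fairlyuncertain objects used above.
datasets = ['DatasetA', 'DatasetB']

def load_instance(name, seed=0):
    rng = np.random.default_rng(seed)
    return {'y': rng.integers(0, 2, size=50)}

def algo_mean(instance):
    # Predict the label mean for every point.
    return {'pred': np.full(len(instance['y']), instance['y'].mean())}

def algo_zero(instance):
    # Always predict zero.
    return {'pred': np.zeros(len(instance['y']))}

algorithms = {'Mean': algo_mean, 'Zero': algo_zero}

# Same loop structure as the experiment above.
results = {}
for dataset in datasets:
    instance = load_instance(dataset)
    results[dataset] = {'instance': instance}
    for algo_name in algorithms:
        results[dataset][algo_name] = algorithms[algo_name](instance)
```

The resulting structure is `results[dataset][algorithm]`, with the raw instance kept under the `'instance'` key, which is what the plotting step iterates over.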

How to Add a New Model/Algorithm

To add a new model or algorithm, follow these steps:

1. **Add Your Model Class if Necessary**

If your new algorithm requires a new model class, make those changes:

```python
class Model:
    def __init__(self, **kwargs):
        # Initialization logic
        pass

    def fit(self, X, y):
        # Fit logic
        pass

    def predict(self, X):
        # Prediction logic
        pass
```
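As a minimal baseline filling in the three methods, here is a sketch. Having `predict` return a (mean, standard deviation) pair is an assumption motivated by the package's heteroscedastic-uncertainty focus, not its documented interface.

```python
import numpy as np

class Model:
    """Baseline: predicts the training-label mean everywhere, with the
    training-label standard deviation as the uncertainty estimate."""

    def __init__(self, **kwargs):
        self.params = kwargs
        self.mean_ = None
        self.std_ = None

    def fit(self, X, y):
        self.mean_ = float(np.mean(y))
        self.std_ = float(np.std(y))
        return self

    def predict(self, X):
        # One (mean, std) prediction per row of X.
        n = len(X)
        return np.full(n, self.mean_), np.full(n, self.std_)
```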

2. **Import Necessary Libraries**

Ensure necessary libraries and modules are imported:

```python
import numpy as np
...
import fairlyuncertain as fu
```
3. **Define the New Algorithm**

Implement the logic for your new algorithm:

```python
def new_algorithm(instance, Model):
    # Algorithm implementation
    pass
```
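A filled-in sketch of the fit-then-predict pattern such an algorithm typically follows. The `MeanModel` stub, the instance keys, and the returned `{'pred', 'std'}` dictionary are all illustrative assumptions.

```python
import numpy as np

class MeanModel:
    # Hypothetical stand-in for a real model class.
    def fit(self, X, y):
        self.mean_ = float(np.mean(y))
        self.std_ = float(np.std(y))
        return self

    def predict(self, X):
        n = len(X)
        return np.full(n, self.mean_), np.full(n, self.std_)

def new_algorithm(instance, Model=MeanModel):
    # Fit on the training split, then predict mean and uncertainty
    # on the test split.
    model = Model()
    model.fit(instance['X_train'], instance['y_train'])
    pred, std = model.predict(instance['X_test'])
    return {'pred': pred, 'std': std}
```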
4. **Add Algorithm to Dictionary**

Update the algorithms dictionary to include your new algorithm:

```python
algorithms = {
    'New Algorithm': new_algorithm,
    # other algorithms...
}
```

How to Add a New Metric/Evaluation

To add a new metric or evaluation, follow these steps:

1. **Import Necessary Libraries**

Import required libraries for your metric:

```python
import numpy as np
...
```
2. **Define the New Metric**

Implement the logic for the new metric:

```python
def new_metric(pred, y, group):
    # Metric calculation logic
    pass
```
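As one concrete example of the three-argument signature, here is a statistical-parity-style gap: the absolute difference in mean prediction between groups. This is a sketch of one possible metric, not one of the package's built-in metrics.

```python
import numpy as np

def new_metric(pred, y, group):
    # Absolute gap in mean prediction across groups; y is unused by
    # this particular metric but kept for the common (pred, y, group)
    # signature shown above.
    pred, group = np.asarray(pred, dtype=float), np.asarray(group)
    means = [pred[group == g].mean() for g in np.unique(group)]
    return float(max(means) - min(means))
```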
3. **Add Metric to Dictionary**

Update the metrics dictionary to include your new metric:

```python
metrics = {
    'New Metric': new_metric,
    # other metrics...
}
```
4. **Use New Metric in Evaluation**

Ensure the new metric is used in the evaluation process:

```python
def evaluate_metrics(results, metrics):
    for metric_name in metrics:
        for dataset in results:
            results[dataset][metric_name] = metrics[metric_name](
                results[dataset]['pred'],
                results[dataset]['y'],
                results[dataset]['group']
            )
```



Download files

Download the file for your platform.

Source Distribution

fairlyuncertain-0.0.4.tar.gz (22.3 kB)

Uploaded Source

Built Distribution


fairlyuncertain-0.0.4-py3-none-any.whl (27.3 kB)

Uploaded Python 3

File details

Details for the file fairlyuncertain-0.0.4.tar.gz.

File metadata

  • Download URL: fairlyuncertain-0.0.4.tar.gz
  • Upload date:
  • Size: 22.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.3

File hashes

Hashes for fairlyuncertain-0.0.4.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | d664426e9fb3d2c45000009f71d414c5a03fd245e8a447215e20da1576ecaee4 |
| MD5 | 1017d39ca4f32d63828ff8358e6c8a5f |
| BLAKE2b-256 | 081aba7483d0a1c7cea665de613717f617e4817985089c7b25076cd21a15e45f |


File details

Details for the file fairlyuncertain-0.0.4-py3-none-any.whl.

File metadata

File hashes

Hashes for fairlyuncertain-0.0.4-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | f2c557a653905c258c08ef3a3ee58826be7d7ae789432ed123a70b2a4fff5755 |
| MD5 | 60c190f4867cae84515a42aad113d1d7 |
| BLAKE2b-256 | 1b16ba244d49cbe0ecb22b87a3b98d823d5eeace73a61126bd0a469f07cf9b14 |

