
Heteroscedastic uncertainty estimates for fair algorithms.

Project description

FairlyUncertain Documentation

Table of Contents

How to Add a New Dataset

To add a new dataset, follow these steps:

  1. Import Necessary Libraries

    Ensure you have the required libraries imported:

    import pickle
    import os
    import numpy as np
    import pandas as pd
    import requests
    
  2. Create Dataset Loader Function

Implement a function to load your new dataset. Here is an example:

def get_new_dataset():
    # Add logic to download, preprocess and return your dataset
    pass
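As a concrete illustration, here is a hypothetical loader that builds a small synthetic dataset in place of a real download; an actual loader would fetch the raw file with `requests` and preprocess it with `pandas`. The column names and the `(X, y, group)` return shape are assumptions for this sketch, not part of the FairlyUncertain API:

```python
import numpy as np
import pandas as pd

def get_synthetic_dataset():
    # A real loader would download the raw data here (e.g. with requests)
    # and read it with pandas; we synthesize a small frame instead.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        'feature_1': rng.normal(size=100),
        'feature_2': rng.normal(size=100),
        'group': rng.integers(0, 2, size=100),  # binary protected attribute
    })
    df['target'] = df['feature_1'] + rng.normal(scale=0.5, size=100)
    X = df[['feature_1', 'feature_2']].to_numpy()
    y = df['target'].to_numpy()
    group = df['group'].to_numpy()
    return X, y, group
```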
  3. Update Data Loaders Dictionary

Add the new dataset loader function to the dictionary of data loaders:

dataloaders = {
    'New Dataset': get_new_dataset,
    # other datasets...
}
  4. Ensure Dataset Caching

Make sure your dataset is cached for quick access:

def cache_dataset(name):
    # Logic to cache the dataset
    pass

def read_dataset(name):
    # Logic to read the cached dataset
    pass
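A minimal sketch of the two helpers, assuming datasets are serialized with `pickle` into a local `cache/` directory; the directory name and file layout are assumptions for illustration, and `cache_dataset` here takes the data to cache explicitly as a second argument:

```python
import os
import pickle

CACHE_DIR = 'cache'

def cache_dataset(name, data):
    # Serialize the dataset to cache/<name>.pkl for quick re-use.
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(os.path.join(CACHE_DIR, f'{name}.pkl'), 'wb') as f:
        pickle.dump(data, f)

def read_dataset(name):
    # Load a previously cached dataset, or None if it is not cached yet.
    path = os.path.join(CACHE_DIR, f'{name}.pkl')
    if not os.path.exists(path):
        return None
    with open(path, 'rb') as f:
        return pickle.load(f)
```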
  5. Load Dataset Instance

Ensure that the dataset can be loaded correctly:
def load_instance(name, train_split=.8):
    # Logic to load dataset instance
    pass
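For illustration, a minimal split helper. To stay self-contained it takes the `(X, y, group)` arrays directly rather than a dataset name, and returns a plain dict; the exact instance schema in FairlyUncertain may differ:

```python
import numpy as np

def load_instance(X, y, group, train_split=0.8):
    # Shuffle once, then split features, targets, and group labels
    # into train/test portions according to train_split.
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(y))
    cut = int(train_split * len(y))
    train, test = idx[:cut], idx[cut:]
    return {
        'X_train': X[train], 'y_train': y[train], 'group_train': group[train],
        'X_test': X[test], 'y_test': y[test], 'group_test': group[test],
    }
```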

How to Write a New Experiment

To write a new experiment, follow these steps:

  1. Import Necessary Libraries

Ensure all required libraries are imported:

from tqdm import tqdm
import numpy as np
...
import fairlyuncertain as fu
  2. Define Algorithms to Test

List the algorithms you want to test:

algorithm_names = ['Algorithm1', 'Algorithm2']
algorithms = {name: fu.algorithms[name] for name in algorithm_names}
  3. Run Experiments

Loop through datasets and algorithms to run your experiments:

results = {}
for dataset in tqdm(fu.datasets):
    instance = fu.load_instance(dataset)
    results[dataset] = {'instance': instance}
    for algo_name in algorithms:
        results[dataset][algo_name] = algorithms[algo_name](instance)
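The loop above can be dry-run without the real library by stubbing the loader and algorithms; everything below (the dataset names, the constant mean predictor) is invented purely for illustration:

```python
import numpy as np

# Stand-ins for fu.load_instance and fu.algorithms, for a self-contained dry run.
def load_instance(name):
    return {'X': np.ones((4, 2)), 'y': np.zeros(4)}

algorithms = {'Mean Predictor': lambda inst: np.full_like(inst['y'], inst['y'].mean())}
datasets = ['Dataset A', 'Dataset B']

results = {}
for dataset in datasets:
    instance = load_instance(dataset)
    results[dataset] = {'instance': instance}
    for algo_name in algorithms:
        results[dataset][algo_name] = algorithms[algo_name](instance)
```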
  4. Evaluate and Plot Results

Evaluate the results using the appropriate metrics and plot the outcomes:

for metric_name in fu.metrics:
    fu.plot_results(results, algorithms, fu.datasets, metric_name=metric_name)

How to Add a New Model/Algorithm

To add a new model or algorithm, follow these steps:

  1. Add your Model Class if Necessary

If your new algorithm requires a new model class, make those changes:

class Model:
    def __init__(self, **kwargs):
        # Initialization logic
        pass

    def fit(self, X, y):
        # Fit logic
        pass

    def predict(self, X):
        # Prediction logic
        pass
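As one possible concrete instance of this interface, an ordinary-least-squares model written with plain NumPy; this is an illustrative stand-in, not a model class that FairlyUncertain ships:

```python
import numpy as np

class LinearModel:
    """Least squares with an intercept, matching the fit/predict shape above."""

    def __init__(self, **kwargs):
        self.coef_ = None

    def fit(self, X, y):
        # Append a bias column and solve the least-squares system.
        Xb = np.hstack([X, np.ones((len(X), 1))])
        self.coef_, *_ = np.linalg.lstsq(Xb, y, rcond=None)
        return self

    def predict(self, X):
        Xb = np.hstack([X, np.ones((len(X), 1))])
        return Xb @ self.coef_
```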
  2. Import Necessary Libraries

Ensure necessary libraries and modules are imported:
import numpy as np
...
import fairlyuncertain as fu
  3. Define the New Algorithm

Implement the logic for your new algorithm:

def new_algorithm(instance, Model):
    # Algorithm implementation
    pass
  4. Add Algorithm to Dictionary

Update the algorithms dictionary to include your new algorithm:

algorithms = {
    'New Algorithm': new_algorithm,
    # other algorithms...
}

How to Add a New Metric/Evaluation

To add a new metric or evaluation, follow these steps:

  1. Import Necessary Libraries

Import required libraries for your metric:

import numpy as np
...
  2. Define the New Metric

Implement the logic for the new metric:

def new_metric(pred, y, group):
    # Metric calculation logic
    pass
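As a worked example of the `(pred, y, group)` signature, here is a hypothetical group-gap metric: the absolute difference in mean absolute error between two groups. The name and definition are illustrative, not one of the library's built-in metrics:

```python
import numpy as np

def mae_group_gap(pred, y, group):
    # Mean absolute error computed separately per group, then compared.
    pred, y, group = np.asarray(pred), np.asarray(y), np.asarray(group)
    errors = np.abs(pred - y)
    mae_0 = errors[group == 0].mean()
    mae_1 = errors[group == 1].mean()
    return abs(mae_0 - mae_1)
```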
  3. Add Metric to Dictionary

Update the metrics dictionary to include your new metric:

metrics = {
    'New Metric': new_metric,
    # other metrics...
}
  4. Use New Metric in Evaluation

Ensure the new metric is used in the evaluation process:

def evaluate_metrics(results, metrics):
    for metric_name in metrics:
        for dataset in results:
            results[dataset][metric_name] = metrics[metric_name](
                results[dataset]['pred'],
                results[dataset]['y'],
                results[dataset]['group']
            )
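A self-contained usage example of this evaluation pattern, with a toy results dict and a single accuracy-style metric; the function is repeated so the snippet runs on its own, and all names below are invented for illustration:

```python
import numpy as np

def evaluate_metrics(results, metrics):
    # Apply every metric to every dataset's stored predictions.
    for metric_name in metrics:
        for dataset in results:
            results[dataset][metric_name] = metrics[metric_name](
                results[dataset]['pred'],
                results[dataset]['y'],
                results[dataset]['group'],
            )

# Toy inputs, invented for illustration.
results = {'Toy': {
    'pred': np.array([1, 0, 1, 1]),
    'y': np.array([1, 0, 0, 1]),
    'group': np.array([0, 0, 1, 1]),
}}
metrics = {'Accuracy': lambda pred, y, group: float((pred == y).mean())}
evaluate_metrics(results, metrics)
```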

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

fairlyuncertain-0.0.2.tar.gz (21.7 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

fairlyuncertain-0.0.2-py3-none-any.whl (27.0 kB)

Uploaded Python 3

File details

Details for the file fairlyuncertain-0.0.2.tar.gz.

File metadata

  • Download URL: fairlyuncertain-0.0.2.tar.gz
  • Upload date:
  • Size: 21.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.1

File hashes

Hashes for fairlyuncertain-0.0.2.tar.gz
Algorithm Hash digest
SHA256 c0c4e0946f826c71eabb530ddad16f1866a32a5082130aa55e6f38e02fcdfdc9
MD5 0192f2319eb4a6d1c5f033b0db379822
BLAKE2b-256 eeadc3742ff7c673761cd8e27ad2e46aa3173407f4d7c3618b0d0406b5f7215b


File details

Details for the file fairlyuncertain-0.0.2-py3-none-any.whl.

File metadata

File hashes

Hashes for fairlyuncertain-0.0.2-py3-none-any.whl
Algorithm Hash digest
SHA256 e8c6d0d611d7b43748679560b0fc5ccd324e423d19325a236be02fb0085fad83
MD5 3a4d8cfb1c35661cbd7ad053102ea234
BLAKE2b-256 677776b3de0faa5b5d6b9c74250acf498681f3afb8d4bd7c4caa5745760d3dd9

