
A contextual bandit benchmarking package.

Project description

Coba

What is it?

Coba is a powerful benchmarking framework built specifically for research with contextual bandit algorithms.

How do you benchmark?

Think for a second about the last time you benchmarked an algorithm or dataset and ask yourself

  1. Was it easy to add new data sets?
  2. Was it easy to add new algorithms?
  3. Was it easy to create, run and share benchmarks?

The Coba Way

Coba was built from the ground up to do all that and more.

Coba is...

  • ... light-weight (it has no dependencies to get started)
  • ... distributed (it was built to work across the web with caching, API-key support, checksums and more)
  • ... verbose (it has customizable, hierarchical logging for meaningful, readable feedback on long-running jobs)
  • ... robust (benchmarks write every action to file so they can always be resumed after a system crash)
  • ... just-in-time (no resources are loaded until needed, and they are released immediately to keep memory small)
  • ... a duck? (Coba relies only on duck-typing so no inheritance is needed to implement our interfaces)

But don't take our word for it. We encourage you to look at the code yourself or read more below.

Workflow

Coba is architected around a simple workflow: Simulations -> Benchmark -> Learners -> Results.

Simulations contain all the necessary logic to define an environment. With a collection of simulations we then define a Benchmark. Benchmarks possess all the rules for performance evaluation. Finally, once we have a benchmark we can apply it to learners to see how each learner performs.
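To make the workflow concrete, here is a stdlib-only sketch of the Simulations -> Benchmark -> Learners -> Results flow. All names here are illustrative, not Coba's actual API: each simulation is just a list of (context, actions, rewards) interactions, and a learner is anything with predict and learn methods.

```python
import random

def evaluate(simulations, learners):
    """Toy benchmark loop: run every learner through every simulation and
    record the total reward each learner earned on each simulation."""
    results = []  # (simulation index, learner name, total reward)
    for s, interactions in enumerate(simulations):
        for learner in learners:
            total = 0.0
            for context, actions, rewards in interactions:
                action = learner.predict(context, actions)   # choose an action
                reward = rewards[actions.index(action)]      # observe its reward
                learner.learn(context, action, reward)       # update the learner
                total += reward
            results.append((s, type(learner).__name__, total))
    return results

class RandomLearner:
    """A baseline learner that picks actions uniformly at random."""
    def __init__(self, seed=1):
        self._rng = random.Random(seed)
    def predict(self, context, actions):
        return self._rng.choice(actions)
    def learn(self, context, action, reward):
        pass  # a random policy never updates
```

The real Benchmark adds execution concerns on top of this loop (multiprocessing, caching, resumable result files), but the evaluation logic follows this shape.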

Simulations

Simulations are the core unit of evaluation in Coba. They are nothing more than a collection of interactions with an environment and potential rewards. A number of tools have been built into Coba to make simulation creation easier. All these tools are defined in the coba.simulations module. We describe these tools in more detail below.

Importing Simulations From Classification Data Sets

Classification data sets are the easiest way to quickly evaluate CB algorithms with Coba. Coba natively supports:

  • Binary, multiclass and multi-label problems
  • Dense and sparse representations
  • OpenML, CSV, ARFF, LibSVM, and the extreme classification (Manik) format
  • Local files and files over http (with local caching)

The classification simulations built into Coba are OpenmlSimulation, CsvSimulation, ArffSimulation, LibsvmSimulation, and ManikSimulation.
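Classification data sets become bandit simulations via the usual supervised-to-bandit conversion: the action set is the set of labels, and an action earns reward 1 when it matches the example's true label. A minimal sketch of that conversion (the function name is illustrative, not Coba's API):

```python
def classification_to_interactions(features, labels):
    """Convert a supervised dataset into bandit interactions: the action
    set is the label set, and each action's reward is 1.0 for the true
    label and 0.0 otherwise."""
    actions = sorted(set(labels))
    interactions = []
    for x, y in zip(features, labels):
        rewards = [1.0 if a == y else 0.0 for a in actions]
        interactions.append((x, actions, rewards))
    return interactions
```

Under this conversion a learner's average reward is simply its classification accuracy, which is what makes these data sets such a convenient yardstick for CB algorithms.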

Generating Simulations From Generative Functions

Sometimes we have well-defined models within which an agent has to make decisions. To support evaluation in these domains one can use LambdaSimulation to build a simulation from generative functions.
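In spirit, a generative simulation is built from a few functions: one producing contexts, one producing the available actions, and one producing rewards. A stdlib-only sketch of the idea (the function and its signature are illustrative; check LambdaSimulation in coba.simulations for the real interface):

```python
def lambda_simulation(n, context_fn, actions_fn, reward_fn):
    """Build n interactions from generative functions: context_fn(i) gives
    the i-th context, actions_fn(i, context) the action set, and
    reward_fn(i, context, action) the reward for each action."""
    interactions = []
    for i in range(n):
        context = context_fn(i)
        actions = actions_fn(i, context)
        rewards = [reward_fn(i, context, a) for a in actions]
        interactions.append((context, actions, rewards))
    return interactions
```

Because everything is a plain function, a whole family of related environments can be generated just by varying the parameters closed over by the lambdas.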

Creating Simulations From Scratch

If more customization is needed beyond what is offered above then you can easily create your own simulation by implementing Coba's simple Simulation interface.
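Because Coba relies on duck typing, a custom simulation needs no base class; it only has to have the right shape. A sketch of what such a class might look like (the exact members Coba expects should be checked against the Simulation interface in coba.simulations; these names are illustrative):

```python
class ConstantSimulation:
    """A hand-rolled simulation: no inheritance, just the expected shape."""

    def __init__(self, n):
        self._n = n

    @property
    def interactions(self):
        # Each interaction: a context, the available actions, and the
        # reward associated with each action.
        return [(None, ["stay", "go"], [0.0, 1.0]) for _ in range(self._n)]
```

Since only the shape matters, such a class can be dropped into any code that consumes simulations without touching Coba's class hierarchy.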

Benchmarks

The Benchmark class contains all the logic for learner performance evaluation. This includes both evaluation logic (e.g., which simulations and how many interactions) and execution logic (e.g., how many processors to use and where to write results). There is only one Benchmark implementation in Coba and it can be found in the coba.benchmarks module.

Learners

Learners are algorithms which are able to improve their action selection through interactions with simulations.

A number of algorithms are provided natively with Coba for quick comparisons. These include:

  • All contextual bandit learners in VowpalWabbit
  • UCB1-Tuned Bandit Learner by Auer et al. 2002
  • Corral by Agarwal et al. 2017
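As one concrete example, UCB1-Tuned (Auer et al. 2002) is a context-free bandit learner whose index adds a variance-aware exploration bonus to each arm's empirical mean. A self-contained sketch of the algorithm (this is an independent implementation of the published index, not Coba's own code):

```python
import math

class Ucb1Tuned:
    """UCB1-Tuned: pick the arm maximizing
    mean + sqrt((ln t / n) * min(1/4, V)), where V is the arm's empirical
    variance plus the exploration term sqrt(2 ln t / n)."""

    def __init__(self, n_arms):
        self._n     = [0] * n_arms     # pulls per arm
        self._sum   = [0.0] * n_arms   # sum of rewards per arm
        self._sumsq = [0.0] * n_arms   # sum of squared rewards per arm
        self._t     = 0                # total pulls so far

    def predict(self):
        # Pull every arm once before applying the index.
        for arm, n in enumerate(self._n):
            if n == 0:
                return arm
        def index(arm):
            n    = self._n[arm]
            mean = self._sum[arm] / n
            var  = self._sumsq[arm] / n - mean * mean
            v    = var + math.sqrt(2 * math.log(self._t) / n)
            return mean + math.sqrt(math.log(self._t) / n * min(0.25, v))
        return max(range(len(self._n)), key=index)

    def learn(self, arm, reward):
        self._t += 1
        self._n[arm] += 1
        self._sum[arm] += reward
        self._sumsq[arm] += reward * reward
```

Because the min(1/4, V) term shrinks the bonus for low-variance arms, UCB1-Tuned typically explores less than plain UCB1 on near-deterministic rewards.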

Examples

An examples directory is included in the repository with a number of code and benchmark demonstrations. These examples show how to create benchmarks, evaluate learners against them, and plot the results.

Project details


Release history

Download files

Download the file for your platform.

Source Distribution

coba-3.2.2.tar.gz (56.8 kB view details)

Uploaded Source

Built Distribution


coba-3.2.2-py3-none-any.whl (72.4 kB view details)

Uploaded Python 3

File details

Details for the file coba-3.2.2.tar.gz.

File metadata

  • Download URL: coba-3.2.2.tar.gz
  • Upload date:
  • Size: 56.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.3.0 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.61.0 CPython/3.6.13

File hashes

Hashes for coba-3.2.2.tar.gz
Algorithm Hash digest
SHA256 b30bc24b1ae2ba9266ed0f14175cae444dff22bba0f3c04bdd2a41681bc4a9e7
MD5 5125fe1921ab0dcc0a986d607cf5dbb3
BLAKE2b-256 957e3b0cb05b6e445d6da3eab0950d1b502c23e09093a10c959a7d97b0483e76


File details

Details for the file coba-3.2.2-py3-none-any.whl.

File metadata

  • Download URL: coba-3.2.2-py3-none-any.whl
  • Upload date:
  • Size: 72.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.3.0 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.61.0 CPython/3.6.13

File hashes

Hashes for coba-3.2.2-py3-none-any.whl
Algorithm Hash digest
SHA256 6d8ecdc09b4bd142e190e3a6facdc05ae444c77fae26adb1df773596cc6a36b4
MD5 562e8b0a61d2b815228e2cf11c1a03d3
BLAKE2b-256 e15fcd6bb7d5fad87ba0a3f7e3927fddef5405f437ab44501dc1f6703b807425

