A convenient interface that provides structured representations of the PET dataset (hosted on Hugging Face) and benchmarks to test approaches on.
Project description
Benchmarking procedure to test approaches on the PET dataset (hosted on Hugging Face).
This is an alpha version; names and functions may change, so do not use it in production. Documentation will follow soon.
Example: how to benchmark an approach
from petbenchmarks.benchmarks import BenchmarkApproach

BenchmarkApproach(tested_approach_name='Approach-name',
                  predictions_file_or_folder='path-to-prediction-file.json')
The BenchmarkApproach object does all the work: it reads the predictions file, computes the scores, and generates a report in the same directory as path-to-prediction-file.json.
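Before calling BenchmarkApproach, you need a predictions file on disk. The exact JSON schema expected by petbenchmarks is not documented here, so the keys below (document identifiers mapping to predicted process elements) are assumptions used purely for illustration; this sketch only shows preparing and round-tripping such a file with the standard library.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical predictions structure -- the real schema expected by
# petbenchmarks is not documented here, so these keys are assumptions.
predictions = {
    "doc-1.1": {"activities": ["send invoice", "receive payment"]},
    "doc-1.2": {"activities": ["check order"]},
}

# Write the predictions to a JSON file; BenchmarkApproach takes a path
# to a prediction file (or a folder of such files).
out_dir = Path(tempfile.mkdtemp())
pred_file = out_dir / "predictions.json"
pred_file.write_text(json.dumps(predictions, indent=2))

# Round-trip check: the file on disk parses back to the same structure.
loaded = json.loads(pred_file.read_text())
print(loaded == predictions)
```

The resulting path (here pred_file) is what you would pass as predictions_file_or_folder.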
Created by Patrizio Bellan.
Download files
Source Distribution
petbenchmarks-0.0.1a1.tar.gz (12.9 kB)