Finetune_Eval_Harness

Project description

Finetune-Evaluation-Harness

Overview

This project is a unified framework for evaluating various LLMs on a wide range of evaluation tasks. Some of the features of this framework:

  • Multiple task types supported: classification, NER tagging, and question answering
  • Support for parameter-efficient fine-tuning (PEFT)
  • Running multiple tasks in a single invocation

Basic Usage

To evaluate a model (e.g., German BERT) on a task, use something like this:

python main.py --model_name_or_path bert-base-german-cased \
--task_list germeval2018 \
--results_logging_dir /sample/directory/results \
--output_dir /sample/directory

This framework is built on top of Hugging Face, so all the keyword arguments accepted by the regular HF transformers Trainer work here as well: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py.
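
For example, standard Hugging Face TrainingArguments flags such as --learning_rate, --weight_decay, and --seed should pass straight through (a sketch; the values here are illustrative, not tuned recommendations):

python main.py --model_name_or_path bert-base-german-cased \
--task_list germeval2018 \
--results_logging_dir /sample/directory/results \
--output_dir /sample/directory \
--learning_rate 2e-5 \
--weight_decay 0.01 \
--seed 42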

Some Important Arguments

--model_name_or_path MODEL_NAME_OR_PATH
    Path to pretrained model or model identifier from huggingface.co/models (default: None)

--task_list TASK_LIST [TASK_LIST ...]
    List of tasks passed in order. (default: None) e.g. germeval2018, germeval2017, gnad10, german_europarl

--results_logging_dir RESULTS_LOGGING_DIR
    Directory where the results of the run are saved as a JSON file. (default: None)

--output_dir OUTPUT_DIR
    The output directory where the model predictions and checkpoints will be written. (default: None)

--num_train_epochs NUM_TRAIN_EPOCHS
    Total number of training epochs to perform. (default: 1.0)

--per_device_train_batch_size PER_DEVICE_TRAIN_BATCH_SIZE
    Batch size per GPU/TPU core/CPU for training. (default: 8)

--use_fast_tokenizer [USE_FAST_TOKENIZER]
    Whether to use one of the fast tokenizers (backed by the tokenizers library) or not. (default: True)

If you are unsure what any of the parameters does, --help is your friend.
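
Putting the arguments above together, a multi-task run might look like this (a sketch; the directories are placeholders):

python main.py --model_name_or_path bert-base-german-cased \
--task_list germeval2018 german_europarl \
--results_logging_dir /sample/directory/results \
--output_dir /sample/directory \
--num_train_epochs 3 \
--per_device_train_batch_size 16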

List of Supported Tasks

Implementing New Tasks

To implement a new task in eval harness, see this guide.

Evaluating the Coverage of the Current Code

Go to the GitHub Actions section of this repository and start the workflow named "Evaluate"; it checks whether the coverage of the existing code exceeds 80%. The build status is also visible on the main repository page.

Guidelines On Running Tasks

  • For some tasks, make sure to specify the exact dataset config, depending on your needs
  • If text sequence processing fails for a classification task, try setting --use_fast_tokenizer to False, as in the sketch below
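
A minimal sketch of that fallback run, reusing only the documented flags (directories are placeholders):

python main.py --model_name_or_path bert-base-german-cased \
--task_list germeval2018 \
--results_logging_dir /sample/directory/results \
--output_dir /sample/directory \
--use_fast_tokenizer False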

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

finetune_eval-0.6.0.dev1.tar.gz (44.3 kB)

Uploaded Source
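
Once downloaded, the sdist can be installed directly with pip (a usage sketch; it assumes the tarball is in the current directory):

pip install finetune_eval-0.6.0.dev1.tar.gz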

File details

Details for the file finetune_eval-0.6.0.dev1.tar.gz.

File metadata

  • Download URL: finetune_eval-0.6.0.dev1.tar.gz
  • Upload date:
  • Size: 44.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.7.16

File hashes

Hashes for finetune_eval-0.6.0.dev1.tar.gz
  • SHA256: 899e21400577fefe00dd856b502a12644766d92ca221e7dec6184a39b3fe65c7
  • MD5: 41cd992c22c25399278d97d2006217a1
  • BLAKE2b-256: 415759a49286049de9387fb04d17157cb6fd484aceca0d1089ab0803f33d30e6

See more details on using hashes here.
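
For example, the SHA256 digest can be checked locally with a short Python snippet (a minimal sketch; it assumes the tarball was saved in the current directory):

import hashlib

# Expected SHA256 digest from the hash list above
EXPECTED = "899e21400577fefe00dd856b502a12644766d92ca221e7dec6184a39b3fe65c7"

# Hash the downloaded sdist and compare against the published digest
with open("finetune_eval-0.6.0.dev1.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

# Fails loudly on a mismatch, e.g. a corrupted or tampered download
assert digest == EXPECTED, f"hash mismatch: {digest}"
print("SHA256 verified")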
