
Analyzing Priming Effects in a Few-Shot Setting


Analyzing Priming Effects in Prompt-Based Learning

How does priming affect prompt-based learning? This project analyzes this effect in stance classification. We train a stance classifier on the IBM stance classification dataset by fine-tuning a GPT-2 model with a prompt, and analyze how the selection of the few shots used in the prompt affects the model's performance. Our main assumption is that the examples should be chosen to be diverse with regard to topic.
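To make this assumption concrete, here is a minimal, hypothetical sketch of topic-diverse shot selection and prompt construction. The function names and the example fields ('topic', 'sentence', 'stance') are illustrative assumptions, not the project's actual code.

import random

def select_diverse_shots(examples, k):
    """Pick k shots, preferring one per topic before repeating a topic.

    `examples` is a list of dicts with hypothetical keys
    'topic', 'sentence', and 'stance'.
    """
    by_topic = {}
    for ex in examples:
        by_topic.setdefault(ex["topic"], []).append(ex)
    topics = list(by_topic)
    random.shuffle(topics)
    shots = []
    # Round-robin over topics: no topic repeats before every
    # topic has contributed one example.
    while len(shots) < k and topics:
        for topic in list(topics):
            if len(shots) == k:
                break
            pool = by_topic[topic]
            shots.append(pool.pop())
            if not pool:
                topics.remove(topic)
    return shots

def build_prompt(shots, query_sentence, query_topic):
    """Format the shots and the query into a GPT-2-style text prompt."""
    blocks = [
        f"Topic: {s['topic']}\nSentence: {s['sentence']}\nStance: {s['stance']}"
        for s in shots
    ]
    blocks.append(f"Topic: {query_topic}\nSentence: {query_sentence}\nStance:")
    return "\n\n".join(blocks)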

  1. To evaluate the prompt fine-tuning, run one of the following commands
  • Hyperparameter optimization
python scripts/run_prompt_fine_tuning.py --validate --optimize 
  • Best hyperparameters
python scripts/run_prompt_fine_tuning.py --validate --optimize 
  2. To evaluate the in-context (prompt) setup, run
python scripts/run_prompt_fine_tuning.py --validate --optimize 
  3. To evaluate DeBERTa (a standard classifier baseline) with all hyperparameters, run the following
python scripts/optimize_baseline.py 

The results of the experiments are logged to your home directory. Experiment parameters can be set in config.yaml.
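As a sketch of how such a config file might be consumed, the snippet below loads parameters with PyYAML. The key names ('learning-rate', 'few-shot-size') are hypothetical; consult the repository's own config.yaml for the actual parameters.

import yaml

# Load the experiment parameters; the keys below are assumed, not
# the project's documented schema.
with open("config.yaml") as f:
    config = yaml.safe_load(f)

learning_rate = config.get("learning-rate", 1e-5)
few_shot_size = config.get("few-shot-size", 4)
print(f"Running with lr={learning_rate}, k={few_shot_size}")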

