
Analyzing priming effects in a few-shot setting

Project description

Analyzing Priming Effects in Prompt-Based Learning

How does priming affect prompt-based learning? This project analyzes this effect in stance classification. We train a stance classifier on the IBM stance classification dataset by fine-tuning a GPT-2 model with a prompt and analyze how the selection of the few-shot examples used in the prompt affects the model's performance. Our main assumption is that the examples should be chosen to be diverse with regard to topic.
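
As a rough illustration of the setup, a few-shot stance prompt for GPT-2 could be assembled as in the sketch below; the template, example arguments, and labels are hypothetical, and the scripts may use a different prompt format.

# Hypothetical few-shot stance prompt; the actual template used by the scripts may differ.
few_shot_examples = [
    {"topic": "nuclear energy", "argument": "Reactors emit no CO2 while operating.", "stance": "pro"},
    {"topic": "school uniforms", "argument": "Uniforms suppress students' individuality.", "stance": "con"},
]

def build_prompt(examples, topic, argument):
    """Concatenate few-shot examples and the query instance into a single prompt string."""
    blocks = [
        f"Topic: {ex['topic']}\nArgument: {ex['argument']}\nStance: {ex['stance']}"
        for ex in examples
    ]
    blocks.append(f"Topic: {topic}\nArgument: {argument}\nStance:")
    return "\n\n".join(blocks)

print(build_prompt(few_shot_examples, "minimum wage", "A higher minimum wage reduces poverty."))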

  1. To evaluate the prompt fine-tuning setup, run the following commands
  • Hyperparameter optimization
python scripts/run_prompt_fine_tuning.py --validate --optimize
  • Best hyperparameters
python scripts/run_prompt_fine_tuning.py --validate
  2. To evaluate the in-context (prompt) setup, run
python scripts/run_prompting.py --validate --optimize 
  3. To evaluate DeBERTa (a standard classifier baseline) with all hyperparameters, run the following
python scripts/optimize_baseline.py 

The results of the experiments are logged to your home directory. The experiment parameters can be saved in config.yaml.
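
If you want to inspect those parameters programmatically, a minimal PyYAML sketch is shown below; the concrete keys in config.yaml are defined by the package's scripts, so none are assumed here.

import yaml

# Minimal sketch: read the experiment parameters stored in config.yaml
# and print them. The concrete keys are defined by the package's scripts.
with open("config.yaml") as f:
    params = yaml.safe_load(f)

print(params)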

Topic Similarity

Examples on similar or diverse topics are sampled using a topic-similarity measure, which relies on a neural topic model (a Contextual Topic Model). The Contextual Topic Model is fine-tuned on the validation set, and the cosine similarities between all test and training instances are calculated and saved. During training, these similarities can be used to apply the desired sampling strategy.
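
Conceptually, this step boils down to pairwise cosine similarities between document-topic vectors; the sketch below assumes such vectors are already available as numpy arrays and is not the package's actual implementation.

import numpy as np

def cosine_similarity_matrix(test_topics, train_topics):
    """Pairwise cosine similarities between two sets of document-topic vectors.

    test_topics:  array of shape (n_test, n_topics)
    train_topics: array of shape (n_train, n_topics)
    Returns an array of shape (n_test, n_train).
    """
    test_norm = test_topics / np.linalg.norm(test_topics, axis=1, keepdims=True)
    train_norm = train_topics / np.linalg.norm(train_topics, axis=1, keepdims=True)
    return test_norm @ train_norm.T

# Toy example with random stand-ins for topic distributions.
rng = np.random.default_rng(0)
print(cosine_similarity_matrix(rng.random((3, 10)), rng.random((5, 10))).shape)  # (3, 5)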

  1. To create a topic model on the validation set, run
python scripts/run_topic_modeling.py --create-model --validate

For training on the test set, drop --validate.

  2. To create a baseline (LDA and Sentence-Transformers) on the validation set, run
python scripts/run_topic_modeling.py --create-baseline --validate

For training on the test set, drop --validate.

  3. To evaluate the topic models and the baseline, run
python scripts/run_topic_modeling.py --evaluate-model --validate

  4. To compute the similarities between all the validation and training arguments, run
python scripts/run_topic_modeling.py --compute-similarity --validate

To load the similarities from code, you can use

similarities = load_similarities("ibmsc", "validation")

which returns a two-dimensional numpy matrix whose first dimension indexes the validation instances and whose second dimension indexes the training instances.
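
For example, given that shape, the indices of the most topically similar training instances for one validation argument can be read off with numpy; the toy matrix below merely stands in for the real one.

import numpy as np

# Toy stand-in for the matrix returned by load_similarities("ibmsc", "validation"),
# with shape (n_validation, n_training).
similarities = np.random.default_rng(0).random((3, 8))

validation_index = 0
few_shot_size = 4

# Training indices sorted by descending topic similarity to this validation instance.
most_similar = np.argsort(similarities[validation_index])[::-1][:few_shot_size]
print(most_similar)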

To find similar or diverse arguments for an argument in the validation or test set, you can use

examples = sample_diverse(test_instance_index, similarities, df_training, few_shot_size)

Similarly, sample_similar can be used:

examples = sample_similar(test_instance_index, similarities, df_training, few_shot_size)

Note that you have to load the test and training instances in the same order as the one used for training the topic model.
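
For intuition only, a diversity-oriented sampler in this spirit might look like the sketch below; it is not the package's sample_diverse implementation, and simply taking the least topically similar training instances is an assumed strategy.

import numpy as np
import pandas as pd

def sample_diverse_sketch(test_instance_index, similarities, df_training, few_shot_size):
    """Toy sampler: return the training examples whose topics are least similar
    to the given test instance, based on a (n_test, n_train) similarity matrix."""
    scores = similarities[test_instance_index]
    chosen = np.argsort(scores)[:few_shot_size]  # lowest similarity first
    return df_training.iloc[chosen]

# Toy usage with random similarities and a tiny training frame.
rng = np.random.default_rng(0)
toy_similarities = rng.random((2, 6))
df_toy_training = pd.DataFrame({"topic": [f"topic {i}" for i in range(6)],
                                "argument": [f"argument {i}" for i in range(6)]})
print(sample_diverse_sketch(0, toy_similarities, df_toy_training, few_shot_size=3))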
