Analyzing Priming Effects in a Few-Shot Setting
Analyzing the Priming Effect in Prompt-Based Learning
How does priming affect prompt-based learning? This project analyzes this effect in stance classification. We train a stance classifier on the IBM stance classification dataset by fine-tuning a GPT-2 model with a prompt, and analyze how the selection of the few-shot examples used in the prompt affects the performance of the model. Our main hypothesis is that the examples should be chosen in a topically diverse manner.
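To make the setup concrete, here is a minimal sketch of how a few-shot stance prompt could be assembled; the template and the field names (`topic`, `argument`, `stance`) are illustrative assumptions, not the package's actual format:

```python
# Minimal sketch of few-shot prompt construction (illustrative only;
# the actual template used by the package may differ).
def build_prompt(few_shot_examples, test_topic, test_argument):
    """Concatenate the primed examples and the test instance into one prompt."""
    parts = []
    for ex in few_shot_examples:
        parts.append(
            f"Topic: {ex['topic']}\n"
            f"Argument: {ex['argument']}\n"
            f"Stance: {ex['stance']}\n"
        )
    # The model is asked to complete the stance of the test instance.
    parts.append(f"Topic: {test_topic}\nArgument: {test_argument}\nStance:")
    return "\n".join(parts)
```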
- To evaluate prompt fine-tuning, run the following commands:
- Hyperparameter optimization:
python scripts/run_prompt_fine_tuning.py --validate --optimize
- Best hyperparameters (without a hyperparameter search):
python scripts/run_prompt_fine_tuning.py --validate
- To evaluate the in-context learning (prompting) setup, run
python scripts/run_prompting.py --validate --optimize
- To evaluate DeBERTa (a conventional fine-tuned classifier) over all hyperparameters, run the following:
python scripts/optimize_baseline.py
- To evaluate Alpaca as an instruction-tuned model, run the following:
python scripts/run_prompt_fine_tuning.py --validate --optimize --alpaca
The results of the experiments are logged to your home directory. The parameters can be configured in config.yaml.
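For reference, a sketch of reading such a configuration from Python; the key names below are hypothetical placeholders, since the actual keys are defined by the package:

```python
import yaml

# Load experiment parameters from config.yaml (key names are hypothetical;
# consult the package's own config.yaml for the real ones).
with open("config.yaml") as f:
    config = yaml.safe_load(f)

few_shot_size = config.get("few-shot-size", 4)
model_name = config.get("model-name", "gpt2")
```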
Priming Sampling Strategies
To run an experiment with a topic sampling strategy, use the parameter `--topic-similar`. This will retrieve examples that are similar to each test instance. The similarity measure can be either Contextualized Topic Models (`--ctm`), Sentence-Transformers (`--baseline`), or Constituency Parse Tree Kernels using FastKassim (`--fkassim`).
For example, a run combining the flags above might look like the following (a hypothetical combination; check the script's --help for the exact interface):
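python scripts/run_prompting.py --validate --topic-similar --ctm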
Topic Similarity
Examples on similar or diverse topics are sampled using a topic similarity measure that relies on neural topic modeling (Contextualized Topic Models). The topic model is fine-tuned on the validation set, and the cosine similarities between all test and training instances are calculated and saved. During training, these precomputed similarities are used to apply the chosen sampling strategy.
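As a sketch of the similarity computation, assuming the topic model yields one topic-distribution vector per instance (shapes and variable names below are illustrative):

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Topic-distribution vectors produced by the topic model
# (illustrative shapes: one row per instance).
test_topics = np.random.rand(100, 50)   # 100 test instances, 50 topics
train_topics = np.random.rand(500, 50)  # 500 training instances, 50 topics

# similarities[i, j] = cosine similarity between test instance i and
# training instance j; this matrix is what gets saved and reused.
similarities = cosine_similarity(test_topics, train_topics)
```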
1) To create a topic model on the validation set, run
python scripts/run_topic_modeling.py --create-model --validate
For training on the test set, drop --validate.
2) To create a baseline (LDA and Sentence-Transformers) on the validation set, run
python scripts/run_topic_modeling.py --create-baseline --validate
For training on the test set, drop --validate.
3) To evaluate the topic models and baselines, run
python scripts/run_topic_modeling.py --evaluate-model --validate
4) To compute the similarities between all validation and training arguments, run
python scripts/run_topic_modeling.py --compute-similarity --validate
To load the similarities from code, you can use
similarities = load_similarities("ibmsc", "validation")
which returns a two-dimensional NumPy matrix whose first dimension indexes the validation instances and whose second dimension indexes the training instances.
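For instance, a minimal sketch of inspecting this matrix to find the most similar training instances for one validation instance (assuming load_similarities has already been imported from the package):

```python
import numpy as np

# Rows index validation instances, columns index training instances.
similarities = load_similarities("ibmsc", "validation")

val_index = 0  # a validation instance
k = 4          # few-shot size

# Training-set indices sorted by descending similarity to validation instance 0.
top_k = np.argsort(similarities[val_index])[::-1][:k]
print(top_k)
```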
To find similar or diverse arguments for an argument in the validation or test set, you can use
examples = sample_diverse(test_instance_index, similarities, df_training, few_shot_size)
Similarly, sample_similar can be used:
examples = sample_similar(test_instance_index, similarities, df_training, few_shot_size)
Note that the test and training instances must be loaded in the same order as the one used for training the topic model.
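Putting the pieces together, an end-to-end sketch of the sampling loop; the import path is an assumption, and df_training stands for a pandas DataFrame of training instances loaded in the topic-model order (see the note above):

```python
# Sketch: sample diverse few-shot examples for every validation instance.
# Assumes the import path below exists and that df_training is a pandas
# DataFrame of training instances, loaded in the same order used to train
# the topic model.
from few_shot_priming import load_similarities, sample_diverse

few_shot_size = 4
similarities = load_similarities("ibmsc", "validation")

for test_instance_index in range(similarities.shape[0]):
    examples = sample_diverse(test_instance_index, similarities,
                              df_training, few_shot_size)
    # `examples` can now be formatted into the few-shot prompt.
```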