FactScoreLite: a package for computing FactScore (fine-grained atomic evaluation of factual precision in long-form text generation).
FactScoreLite
FactScoreLite is an implementation of the FactScore metric, designed for fine-grained factual-accuracy assessment in text generation. This package builds on the framework of the original FactScore repository, which is no longer maintained and contains outdated functions.
Our development addresses these shortcomings by updating the code and ensuring compatibility with current technologies, making FactScoreLite a reliable tool for anyone looking to evaluate textual accuracy with minimal setup.
Get Started
Since the project uses the OpenAI API, make sure you have set up your API key before running any code. For instructions, refer to the OpenAI documentation.
Installing
You can install this package using pip:
pip install factscorelite
or you can install it directly by cloning and installing:
git clone https://github.com/armingh2000/FactScoreLite.git
cd FactScoreLite
pip install .
Components
The package contains three main components:
- AtomicFactGenerator: generating facts for a text.
# atomic_facts.py
class AtomicFactGenerator:
    def run(self, text: str) -> list: ...
- FactScorer: scoring facts of a text based on a knowledge source.
# fact_scorer.py
class FactScorer:
    def get_score(self, facts: list, knowledge_source: str) -> list: ...
- FactScore:
  - Generating the facts for a text.
  - Scoring the facts based on a knowledge source.
  - Dumping the results and GPT outputs to a local JSON file.
# factscore.py
class FactScore:
    def get_factscore(
        self,
        generations: list,
        knowledge_sources: list,
    ) -> tuple: ...
Usage
Extract, score, dump
To extract the facts of a text, score them based on the input knowledge source, and dump the results:
from FactScoreLite import FactScore
# generations = a list of texts you want to calculate FactScore for
# knowledge_sources = a list of texts that the generations were created from
scores, init_scores = FactScore.get_factscore(generations, knowledge_sources)
Extract
To only extract the facts from a text (without scoring/dumping):
from FactScoreLite import AtomicFactGenerator
facts = AtomicFactGenerator.run(text)
Score
To only score the facts of a generation according to a knowledge source (without dumping):
from FactScoreLite import FactScorer
scores = FactScorer.get_scores(facts, knowledge_sources)
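Per the original metric, the score of a single generation is the fraction of its atomic facts judged supported by the knowledge source, and the corpus-level FactScore averages this over all generations. The sketch below is illustrative of that aggregation, not the package's actual code; the function names are hypothetical:

```python
def generation_score(fact_labels: list[bool]) -> float:
    """Fraction of atomic facts judged supported (the per-generation score)."""
    if not fact_labels:
        return 0.0
    return sum(fact_labels) / len(fact_labels)

def corpus_factscore(per_generation_labels: list[list[bool]]) -> float:
    """Average the per-generation scores over the whole corpus."""
    scores = [generation_score(labels) for labels in per_generation_labels]
    return sum(scores) / len(scores)

# Example: two generations with 3 and 2 extracted facts respectively.
labels = [[True, True, False], [True, False]]
print(corpus_factscore(labels))  # (2/3 + 1/2) / 2, approximately 0.583
```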
Fact Extraction Prompt Engineering
To instruct GPT on how to break each sentence into facts, we include examples (demonstrations, i.e., demons) in the prompt. These demons currently target the vehicle domain, but you may want to create your own domain-specific demons. To do this, you can use GPT to generate demons based on your requirements: we prompted GPT with instructions on how to generate the demons for the vehicle domain, and you can alter that prompt to fit your needs.
Once you have your own demons.json file, you can include it in the program by setting the correct config:
import FactScoreLite
FactScoreLite.configs.atomic_facts_demons_path = "/path/to/your/json/file"
# rest of your code
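One way to produce such a file is to build the demons in Python and serialize them. The field names below are an assumption for illustration, not the package's documented format; check the bundled demons.json for the exact keys before writing your own:

```python
import json

# Hypothetical schema -- the key names "Sentence" and "Independent Facts"
# are an assumption; mirror the structure of the package's own demons.json.
demons = [
    {
        "Sentence": "The 2021 Model X has a towing capacity of 5,000 pounds.",
        "Independent Facts": [
            "The Model X was released in 2021.",
            "The Model X has a towing capacity.",
            "The towing capacity is 5,000 pounds.",
        ],
    },
]

with open("my_demons.json", "w") as f:
    json.dump(demons, f, indent=2)
```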
Fact Extraction Prompt
The prompt used for extracting facts from a sentence:
# atomic_facts.py
Please breakdown the following sentence into independent facts:
Sentence:
demon1_sentence
Independent Facts:
- demon1_fact1
- demon1_fact2
- demon1_fact3
Sentence:
demon2_sentence
Independent Facts:
- demon2_fact1
- demon2_fact2
Sentence:
target_sentence
Independent Facts:
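A template like the one above can be assembled mechanically from a demons file. The following is a rough sketch of that assembly, not the package's actual implementation; the dictionary keys "sentence" and "facts" are illustrative:

```python
def build_extraction_prompt(demons: list[dict], target_sentence: str) -> str:
    """Concatenate demonstration sentence/fact pairs, then the target sentence."""
    parts = ["Please breakdown the following sentence into independent facts:\n"]
    for demon in demons:
        parts.append(f"Sentence:\n{demon['sentence']}")
        parts.append("Independent Facts:")
        parts.extend(f"- {fact}" for fact in demon["facts"])
        parts.append("")  # blank line between demonstrations
    parts.append(f"Sentence:\n{target_sentence}")
    parts.append("Independent Facts:")  # GPT completes from here
    return "\n".join(parts)

demons = [{"sentence": "The car seats five.",
           "facts": ["The car has seats.", "It seats five people."]}]
prompt = build_extraction_prompt(demons, "The sedan was launched in 2020.")
```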
Facts Scoring Prompt Engineering
We also use example demonstrations in the scoring instruction prompt. The file contains one positive and multiple negative examples. In each prompt, the positive example plus one randomly selected negative example is included so that GPT scores more accurately. The file also records a reason for each label; these reasons are not used in prompt generation, but they are a promising way to improve GPT's scoring accuracy in the future.
You can also set your own domain-specific examples for the run by running the following:
import FactScoreLite
FactScoreLite.configs.fact_scorer_demons_path = "/path/to/your/json/file"
# rest of your code
Fact Scoring Prompt
The following prompt template is used to instruct GPT for scoring facts:
# fact_scorer.py
Instruction:
Only consider the statement true if it can be directly verified by the information in the context. If the information in the statement cannot be found in the context or differs from it, label it as false.
Context:
knw 1
Statement:
fact 1 True or False?
Output:
True
Context:
knw 2
Statement:
fact 2 True or False?
Output:
False
Context:
target_knowledge_source
Statement:
target_fact True or False?
Output:
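This template too can be assembled from the examples file, and the model's one-word reply reduced to a boolean. The sketch below is illustrative, not the package's code; the key names and the lenient parsing rule are assumptions:

```python
def build_scoring_prompt(examples: list[dict], context: str, fact: str) -> str:
    """Instruction, worked examples, then the target context/statement pair."""
    parts = [
        "Instruction:",
        "Only consider the statement true if it can be directly verified by "
        "the information in the context. If the information in the statement "
        "cannot be found in the context or differs from it, label it as false.",
        "",
    ]
    for ex in examples:
        parts += [f"Context:\n{ex['context']}",
                  f"Statement:\n{ex['statement']} True or False?",
                  f"Output:\n{ex['label']}", ""]
    parts += [f"Context:\n{context}",
              f"Statement:\n{fact} True or False?",
              "Output:"]  # GPT completes with True or False
    return "\n".join(parts)

def parse_label(gpt_output: str) -> bool:
    """Interpret the reply: anything starting with 'true' counts as supported."""
    return gpt_output.strip().lower().startswith("true")
```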
Running the Tests
If you want to change the source code for your use cases, you can check whether the change conflicts with other parts of the project by simply running the tests:
cd FactScoreLite
pytest
Contributing
Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.
Versioning
We use SemVer for versioning. For the versions available, see CHANGELOG.md.
License
This project is licensed under the MIT License - see the LICENSE.md file for details.
Acknowledgments