
Attribute (or cite) statements generated by LLMs back to in-context information.


ContextCite: Attributing Model Generation to Context

[getting started] [example notebooks] [🤗 demo] [blog post #1] [blog post #2] [paper] [bib]
Maintainers: Ben Cohen-Wang, Harshay Shah, and Kristian Georgiev

context_cite is a tool for attributing statements generated by LLMs back to specific parts of the context.

Attributing context via ContextCite
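
Under the hood (as described in the paper), ContextCite splits the context into sources, ablates random subsets of them, and fits a sparse linear surrogate model that predicts how likely the language model is to produce the same response; the surrogate's weights serve as attribution scores. The sketch below is a schematic of that idea, not the library's actual implementation, and response_logit_prob is a hypothetical helper.

import numpy as np
from sklearn.linear_model import Lasso

def attribution_scores(sources, response_logit_prob, num_ablations=64, seed=0):
    # response_logit_prob(mask) is a hypothetical helper: it should return the
    # logit-scaled probability that the model regenerates the same response when
    # only the sources with mask[i] == 1 are kept in the context.
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(num_ablations, len(sources)))
    outputs = np.array([response_logit_prob(mask) for mask in masks])
    surrogate = Lasso(alpha=0.01).fit(masks, outputs)
    return surrogate.coef_  # one attribution score per source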

Getting started

Install context_cite via pip:

pip install context_cite

Using context_cite is as simple as:

from context_cite import ContextCiter

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
context = """
Attention Is All You Need

Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
1 Introduction
Recurrent neural networks, long short-term memory [13] and gated recurrent [7] neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation [35, 2, 5]. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures [38, 24, 15].
Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states ht, as a function of the previous hidden state ht-1 and the input for position t. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. Recent work has achieved significant improvements in computational efficiency through factorization tricks [21] and conditional computation [32], while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains.
Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences [2, 19]. In all but a few cases [27], however, such attention mechanisms are used in conjunction with a recurrent network.
In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.
"""
query = "What type of GPUs did the authors use in this paper?"
# Load the model and tokenizer from Hugging Face and set up a citer for this context and query
cc = ContextCiter.from_pretrained(model_name, context, query, device="cuda")

We can check the model's response using cc.response:

In [1]: cc.response
Out[1]: 'The authors used eight P100 GPUs in their Transformer architecture for training on the WMT 2014 English-to-German translation task.</s>'
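
The trailing </s> is simply the model's end-of-sequence token; if you want to display the answer without it, ordinary string handling is enough (nothing specific to context_cite):

# Strip the end-of-sequence marker from the raw response before displaying it
clean_response = cc.response.replace("</s>", "").strip()
print(clean_response)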

Where did the model get its information? Let's see what the attributions look like!

In [2]: cc.get_attributions(as_dataframe=True, top_k=5)
Out[2]:

[attribution table shown as an image: "Basic ContextCite example"]
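
If you would rather work with the scores programmatically than view a styled dataframe, the sketch below assumes that get_attributions() called without as_dataframe returns a NumPy array with one score per context source, aligned with cc.sources; check the package documentation to confirm this before relying on it.

import numpy as np

# Assumed behavior: get_attributions() returns an np.ndarray of scores,
# one per source in cc.sources (verify against the context_cite docs)
scores = cc.get_attributions()
top = np.argsort(scores)[::-1][:5]  # indices of the five highest-scoring sources
for rank, i in enumerate(top, start=1):
    print(f"{rank}. score={scores[i]:.3f}  source={cc.sources[i]!r}")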

Finally, let's try to attribute a specific part of the response. To do so, we specify a start_idx and end_idx corresponding to the range of the response that we would like to attribute. In this case, we'll specify indices to attribute the phrase "the WMT 2014 English-to-German translation task" from the response.

In [3]: cc.get_attributions(start_idx=83, end_idx=129, as_dataframe=True, top_k=5)
Out[3]:

[attribution table shown as an image: "ContextCite attributions"]
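
Counting character offsets by hand is error-prone; plain string methods (nothing specific to context_cite) can locate the span for you. Whether end_idx is treated as inclusive or exclusive is an assumption here, so check the slice cc.response[start_idx:end_idx] against the phrase you intend to attribute and adjust by one if needed.

# Locate the character span of a phrase within the generated response,
# instead of hard-coding start_idx and end_idx
phrase = "the WMT 2014 English-to-German translation task"
start_idx = cc.response.find(phrase)
end_idx = start_idx + len(phrase)  # assumes an exclusive end index; adjust if needed
assert cc.response[start_idx:end_idx] == phrase
cc.get_attributions(start_idx=start_idx, end_idx=end_idx, as_dataframe=True, top_k=5)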

Example notebooks

Try out context_cite using our example notebooks (you can open them in Google Colab).

Citation

@misc{cohenwang2024contextciteattributingmodelgeneration,
      title={ContextCite: Attributing Model Generation to Context}, 
      author={Benjamin Cohen-Wang and Harshay Shah and Kristian Georgiev and Aleksander Madry},
      year={2024},
      eprint={2409.00729},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2409.00729}, 
}

