DECAF: A Dynamically Extensible Corpus Analysis Framework

DECAF is an open-source Python framework for fine-grained linguistic analysis and filtering of existing datasets, enabling the generation of targeted training interventions for LM generalization research.

Getting Started

For basic analyses and filtering, DECAF can be installed without any external dependencies:

pip install decaffinate

For importing datasets and for more advanced analyses, please install the package with its external dependencies:

pip install decaffinate[full]

For a quick overview of DECAF's core functionality, we recommend taking a look at the demo notebook.

Building an Index

Rather than creating new resources for each experiment, DECAF builds indices over datasets with existing linguistic annotations, and leverages them to analyze, filter, and generate highly controlled and reproducible experimental settings targeting specific research questions. It maintains extensibility by constructing separate indices over raw text (literals) and annotations (structures).
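To make the literal/structure split concrete, here is a minimal, hypothetical sketch in plain Python (not DECAF's actual storage format) that indexes a tiny annotated sentence into separate literal and structure tables over shared character offsets:

```python
from collections import defaultdict

# A tiny annotated sentence: (surface form, POS tag) pairs.
# The annotation scheme here is purely illustrative.
tokens = [("The", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")]

# Literal index: raw text spans -> character offsets.
literal_index = defaultdict(list)
# Structure index: (annotation type, value) -> character offsets.
structure_index = defaultdict(list)

offset = 0
for form, tag in tokens:
    span = (offset, offset + len(form))
    literal_index[form].append(span)
    structure_index[("token", tag)].append(span)
    offset += len(form) + 1  # account for a separating space
```

Because both tables refer to the same underlying character offsets, a new annotation layer only has to add entries to the structure index; the raw text never needs to be re-indexed.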

Indexing is specific to each dataset format, so please refer to the import documentation for details. In general, the import scripts follow this simple structure:

python script/import/format.py \
  --input /path/to/data.txt \
  --output /path/to/index

After building the index, you can query it using the DecafIndex class:

from decaf import DecafIndex

di = DecafIndex('/path/to/index')
literals = di.get_literal_counts()
structures = di.get_structure_counts()

Building a Filter

DECAF treats indices and filters as entities independent of the original corpus. This means that indices can be continually extended with new annotation layers, and that filters can be transferred across datasets.

Filters are constructed using the Filter class, which contains Criterion objects, which in turn contain a sequence of Condition objects.

Filter([
  Criterion([
    Condition(
      stype='type1',
      values=['label1'],
      literals=['form1']
    ),
    Condition(
      stype='type2',
      values=['label2.1', 'label2.2'],
      literals=['form2']
    )],
    operation='AND'
  )],
  sequential=True,
  hierarchy=['sentence', 'token']
)

A Condition specifies what to match at the structure level, i.e., the structure type, its value (if any), and specific surface forms (if any). Within a Criterion, multiple conditions, or nested criteria, can be combined using boolean operations. Finally, the top-level criteria are wrapped in a Filter, which controls whether the criteria may occur anywhere or must occur in direct sequence. If the index contains hierarchical information, we can further enforce that the criteria apply within a certain hierarchical level, e.g., token annotations occurring within a self-contained sentence.
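To build intuition for how AND-combined conditions and the sequential flag interact, here is a purely illustrative toy matcher in plain Python; all names and semantics are simplified assumptions, not DECAF internals:

```python
# Toy matcher: AND semantics plus an optional adjacency constraint.
def matches(annotations, conditions, sequential=False):
    """annotations: list of (stype, value) pairs, one per token;
    conditions: list of (stype, value) pairs to look for."""
    positions = []
    for stype, value in conditions:
        hits = [i for i, (s, v) in enumerate(annotations)
                if s == stype and v == value]
        if not hits:  # AND semantics: every condition must match
            return False
        positions.append(hits)
    if not sequential:
        return True
    # sequential: the (here: two) conditions must match at
    # directly adjacent token positions
    return any(i + 1 in positions[1] for i in positions[0])

sent = [("upos", "DET"), ("upos", "NOUN"), ("upos", "VERB")]
matches(sent, [("upos", "DET"), ("upos", "NOUN")], sequential=True)  # -> True
matches(sent, [("upos", "DET"), ("upos", "VERB")], sequential=True)  # -> False
```

Without `sequential=True`, the second query would succeed, since both conditions match somewhere in the sentence.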

Analyzing an Index

Once an index is built, we can analyze its statistics using general as well as filter-specific queries, e.g.:

di = DecafIndex('/path/to/index')

# get the database sizes
num_literals, num_structures, num_hierarchies = di.get_size()

# get the frequency of each literal
literals = di.get_literal_counts()

# get the frequency of each structure
structures = di.get_structure_counts()
total_of_type = di.get_structure_counts(types=['type'])
type_value_counts = di.get_structure_counts(types=['token'], values=True)
type_value_form_counts = di.get_structure_counts(types=['token'], values=True, literals=True)

# get the co-occurrence across two filters
# (df refers to a previously constructed Filter)
cooccurrence = di.get_cooccurrence(
  source_filter=df,
  target_filter=df
)

Exporting Data

DECAF supports exporting filtered versions of the original data, either by keeping only the matched structures or by masking them out.

# exporting filter results
outputs = di.filter(constraint=df)
# masking filter results
outputs = di.mask(constraint=df)

By default, this will return/mask any structure that is matched. However, sometimes we want to be more precise and remove structures that are matched within a hierarchical constraint (e.g., relative clauses within their main clause). In these cases, we specify an output_level that differs from the matched structure itself:

# exporting filter results
outputs = di.filter(
  constraint=df,
  output_level='substructure'
)
# masking filter results
outputs = di.mask(
  constraint=df,
  output_level='substructure'
)
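To illustrate the difference between keeping and masking matched structures, here is a toy, self-contained sketch in plain Python; the span, mask token, and helper names are assumptions for illustration, not DECAF's API:

```python
# Toy illustration of the filter/mask distinction: given the
# character span of a matched structure, "filter" keeps only the
# match, while "mask" removes it from the surrounding text.
def filter_span(text, span):
    start, end = span
    return text[start:end]

def mask_span(text, span, mask_token="[MASK]"):
    start, end = span
    return text[:start] + mask_token + text[end:]

sentence = "The cat that I saw sleeps"
rel_clause = (8, 18)  # span of the relative clause "that I saw"
filter_span(sentence, rel_clause)  # -> "that I saw"
mask_span(sentence, rel_clause)    # -> "The cat [MASK] sleeps"
```

In the relative-clause example above, masking at a substructure output_level would correspond to the second operation: the containing sentence is exported with the matched clause removed.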

Sharing

DECAF indices can be easily shared, as they are self-contained within their respective directories. Simply zip them up and publish them.

Similarly, filters are transferable across datasets, since they query the underlying, unified index instead of the original corpus itself.

We provide some examples in experiments/, and highly encourage everyone to share their own DECAF experiments as well! ☕️
