DECAF: A Dynamically Extensible Corpus Analysis Framework

DECAF is an open-source Python framework for fine-grained linguistic analysis and filtering of existing datasets, enabling the generation of targeted training interventions for LM generalization research.

Getting Started

For basic analyses and filtering, DECAF can be installed without any external dependencies:

pip install decaffinate

For importing datasets, and for more advanced analyses, please install the package with external dependencies:

pip install decaffinate[full]

For getting a quick overview of DECAF's core functionalities, we recommend taking a look at the demo notebook.

Building an Index

Rather than creating new resources for each experiment, DECAF builds indices over datasets with existing linguistic annotations, and leverages them to analyze, filter, and generate highly controlled and reproducible experimental settings targeting specific research questions. It maintains extensibility by constructing separate indices over raw text (literals) and annotations (structures).

Indexing is specific to each dataset format, so please refer to the import documentation for details. In general, the import scripts follow the simple structure:

script/import/format.py \
	--input /path/to/data.txt \
	--output /path/to/index

After building the index, you can query it using the DecafIndex class:

from decaf import DecafIndex

di = DecafIndex('/path/to/index')
literals = di.get_literal_counts()
structures = di.get_structure_counts()

Building a Filter

DECAF treats indices and filters as entities independent of the original corpus. This means that indices can be continually extended with new annotation layers, and that filters can be transferred across datasets.

Filters are constructed using the Filter class, which contains Criterion objects, which in turn contain a sequence of Condition objects.

Filter([
  Criterion([
    Condition(
      stype='type1',
      values=['label1'],
      literals=['form1']
    ),
    Condition(
      stype='type2',
      values=['label2.1', 'label2.2'],
      literals=['form2']
    )],
    operation='AND'
  )],
  sequential=True,
  hierarchy=['sentence', 'token']
)

A Condition specifies what to match at the structure level, i.e., the structure type, its value (if any), and specific surface forms (if any). Within a Criterion, multiple conditions, or nested criteria, can be combined using boolean operations. Finally, the top-level criteria are wrapped in a Filter, which can enforce whether the criteria may occur anywhere or must occur in a direct sequence. If the index contains hierarchical information, we can further enforce that the criteria must apply within a certain hierarchical level, e.g., token annotations occurring within a self-contained sentence.
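The boolean matching semantics can be illustrated with a small self-contained sketch. Note that this is not DECAF's implementation; the class names below only mimic the API above, applied to a single toy token:

```python
# Illustrative sketch of how a Criterion combines Condition matches.
# NOT DECAF's implementation -- Token is a toy stand-in for an
# annotated structure, used only to show the AND/OR semantics.
from dataclasses import dataclass, field


@dataclass
class Token:
    form: str          # surface form (matched by `literals`)
    annotations: dict  # structure type -> label (matched by `stype`/`values`)


@dataclass
class Condition:
    stype: str
    values: list = field(default_factory=list)
    literals: list = field(default_factory=list)

    def matches(self, token: Token) -> bool:
        label = token.annotations.get(self.stype)
        if label is None:
            return False                      # structure type absent
        if self.values and label not in self.values:
            return False                      # value constraint failed
        if self.literals and token.form not in self.literals:
            return False                      # surface form constraint failed
        return True


@dataclass
class Criterion:
    conditions: list
    operation: str = 'AND'  # 'AND': all conditions must hold; 'OR': any

    def matches(self, token: Token) -> bool:
        results = (c.matches(token) for c in self.conditions)
        return all(results) if self.operation == 'AND' else any(results)


token = Token(form='form1',
              annotations={'type1': 'label1', 'type2': 'label2.1'})
both = Criterion([
    Condition(stype='type1', values=['label1'], literals=['form1']),
    Condition(stype='type2', values=['label2.1', 'label2.2']),
], operation='AND')
print(both.matches(token))  # True: both conditions hold for this token
```

Switching operation to 'OR' would make the criterion match as soon as either condition holds.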

Analyzing an Index

Once an index is built, we can analyze its statistics using general as well as filter-specific queries, e.g.:

di = DecafIndex('/path/to/index')

# get the database sizes
num_literals, num_structures, num_hierarchies = di.get_size()

# get the frequency of each literal
literals = di.get_literal_counts()

# get the frequency of each structure
structures = di.get_structure_counts()
total_of_type = di.get_structure_counts(types=['type'])
type_value_counts = di.get_structure_counts(types=['token'], values=True)
type_value_form_counts = di.get_structure_counts(types=['token'], values=True, literals=True)

# get the co-occurrence across two filters
cooccurrence = di.get_cooccurrence(
  source_filter=df,
  target_filter=df
)

Exporting Data

DECAF supports exporting filtered versions of the original data, either by keeping only the matched structures or by masking them out.

# exporting filter results (`df` is a Filter instance, as constructed above)
outputs = di.filter(constraint=df)
# masking filter results
outputs = di.mask(constraint=df)

By default, this will return or mask any structure that is matched. However, sometimes we want to be more precise and remove structures that are matched within a hierarchical constraint (e.g., relative clauses within their main clause). In these cases, we specify an output_level that differs from the matched structure itself:

# exporting filter results
outputs = di.filter(
  constraint=df,
  output_level='substructure'
)
# masking filter results
outputs = di.mask(
  constraint=df,
  output_level='substructure'
)

Sharing

DECAF indices can be easily shared, as they are self-contained within their respective directories. Simply zip them up and publish.
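Since an index is just a directory, packaging it for publication is a one-liner with the standard library. A minimal sketch (the stand-in directory and the 'decaf.db' file name below are placeholders; in practice, point root_dir at your actual index directory):

```python
# Package an index directory into a zip archive for sharing.
import os
import shutil
import tempfile

# Stand-in for an index directory; replace with your real index path.
index_dir = tempfile.mkdtemp()
open(os.path.join(index_dir, 'decaf.db'), 'w').close()  # placeholder file

archive = shutil.make_archive(
    base_name=os.path.join(tempfile.gettempdir(), 'my_index'),  # -> my_index.zip
    format='zip',
    root_dir=index_dir,  # archive the contents of the index directory
)
print(archive)  # absolute path to the ready-to-publish zip
```

The resulting archive can be unzipped anywhere and opened directly with DecafIndex, since the index carries all of its data with it.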

Similarly, filters are transferable across datasets, since they query the underlying, unified index, instead of the original corpus itself.

We provide some examples under experiments/, and highly encourage everyone to share their DECAF experiments as well! ☕️
