Sequential MAppers for Sequences of HEterogeneous Dictionaries

Project description

SMASHED

Sequential MAppers for Sequences of HEterogeneous Dictionaries is a set of Python interfaces designed to apply transformations to samples in datasets, which are often implemented as sequences of dictionaries.
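To make the underlying data model concrete, here is a minimal pure-Python sketch of a "sequence of dictionaries" dataset and a chain of mappers applied to it. The function names below (lowercase_mapper, split_mapper, apply_mappers) are hypothetical illustrations, not the SMASHED API:

```python
# Illustrative sketch only: these names are hypothetical, NOT the SMASHED API.
# A "dataset" is just a sequence of dicts; a "mapper" transforms each sample
# dict into zero or more new sample dicts.

def lowercase_mapper(sample):
    # One sample in, one transformed sample out.
    return [{"sentences": [s.lower() for s in sample["sentences"]]}]

def split_mapper(sample):
    # One sample in, one new sample per sentence out.
    return [{"sentence": s} for s in sample["sentences"]]

def apply_mappers(dataset, mappers):
    # Apply each mapper sequentially over the whole dataset.
    for mapper in mappers:
        dataset = [new for sample in dataset for new in mapper(sample)]
    return dataset

dataset = [{"sentences": ["Hello World.", "Another One."]}]
out = apply_mappers(dataset, [lowercase_mapper, split_mapper])
print(out)
# [{'sentence': 'hello world.'}, {'sentence': 'another one.'}]
```

Note that a mapper may change the number of samples, which is why the example below ends up with more samples than it started with.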

Example of Usage

Mappers are initialized and then applied sequentially. In the following example, we create a pipeline of mappers that is applied to samples, each containing a sequence of strings. The mappers are responsible for the following operations:

  1. Tokenize each sequence, cropping it to a maximum length if necessary.
  2. Stride sequences together to a maximum length or number of samples.
  3. Add padding symbols to sequences and attention masks.
  4. Concatenate all sequences from a stride into a single sequence.
import transformers
from smashed.interfaces.simple import (
    Dataset,
    TokenizerMapper,
    MultiSequenceStriderMapper,
    TokensSequencesPaddingMapper,
    AttentionMaskSequencePaddingMapper,
    SequencesConcatenateMapper,
)

tokenizer = transformers.AutoTokenizer.from_pretrained(
    pretrained_model_name_or_path='bert-base-uncased',
)

mappers = [
    TokenizerMapper(
        input_field='sentences',
        tokenizer=tokenizer,
        add_special_tokens=False,
        truncation=True,
        max_length=80
    ),
    MultiSequenceStriderMapper(
        max_stride_count=2,
        max_length=512,
        tokenizer=tokenizer,
        length_reference_field='input_ids'
    ),
    TokensSequencesPaddingMapper(
        tokenizer=tokenizer,
        input_field='input_ids'
    ),
    AttentionMaskSequencePaddingMapper(
        tokenizer=tokenizer,
        input_field='attention_mask'
    ),
    SequencesConcatenateMapper()
]

dataset = Dataset([
    {
        'sentences': [
            'This is a sentence.',
            'This is another sentence.',
            'Together, they make a paragraph.',
        ]
    },
    {
        'sentences': [
            'This sentence belongs to another sample',
            'Overall, the dataset is made of multiple samples.',
            'Each sample is made of multiple sentences.',
            'Samples might have a different number of sentences.',
            'And that is the story!',
        ]
    }
])

for mapper in mappers:
    dataset = mapper.map(dataset)

print(len(dataset))

# >>> 5

print(dataset[0])

# >>> {
#    'input_ids': [
#        101,
#        2023,
#        2003,
#        1037,
#        6251,
#        1012,
#        102,
#        2023,
#        2003,
#        2178,
#        6251,
#        1012,
#        102
#    ],
#    'attention_mask': [
#        1,
#        1,
#        1,
#        1,
#        1,
#        1,
#        1,
#        1,
#        1,
#        1,
#        1,
#        1,
#        1
#    ]
# }
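Steps 3 and 4 above can be illustrated in isolation. The following pure-Python sketch is not the SMASHED implementation; it only shows what padding token sequences and building attention masks means in principle (pad_id=0 is an assumption here, chosen to match BERT's [PAD] token id):

```python
# Illustrative sketch, NOT the SMASHED implementation.
def pad_batch(sequences, pad_id=0):
    # Pad every sequence to the length of the longest one and build
    # attention masks: 1 for real tokens, 0 for padding.
    max_len = max(len(seq) for seq in sequences)
    input_ids, attention_masks = [], []
    for seq in sequences:
        n_pad = max_len - len(seq)
        input_ids.append(seq + [pad_id] * n_pad)
        attention_masks.append([1] * len(seq) + [0] * n_pad)
    return {"input_ids": input_ids, "attention_mask": attention_masks}

batch = pad_batch([[101, 2023, 102], [101, 102]])
print(batch["input_ids"])       # [[101, 2023, 102], [101, 102, 0]]
print(batch["attention_mask"])  # [[1, 1, 1], [1, 1, 0]]
```

The attention mask lets the model ignore padding positions, which is why the two lists always have the same shape.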

Dataset Interfaces Available

The initial version of SMASHED supports two dataset interfaces:

  1. interfaces.simple.Dataset: a simple dataset representation that is just a list of Python dictionaries, with some extra convenience methods to make it work with SMASHED. You can create a simple dataset by passing a list of dictionaries to interfaces.simple.Dataset.
  2. HuggingFace datasets library: SMASHED mappers work with any dataset from HuggingFace, whether it is a regular or iterable dataset.

Developing SMASHED

To contribute to SMASHED, make sure to:

  1. (If you are not part of AI2) Fork the repository on GitHub.
  2. Clone it locally.
  3. Create a new branch for the new feature.
  4. Install development dependencies with pip install -r dev-requirements.txt.
  5. Add your new mapper or feature.
  6. Add unit tests.
  7. Run tests, linting, and type checking from the root directory of the repo:
    1. Style: flake8 . (Should return no error)
    2. Style: black . (Should format for you)
    3. Style: isort . (Should sort imports for you)
    4. Static type check: mypy . (Should return no error)
    5. Tests: pytest -v --color=yes tests/ (Should return no error)
  8. Commit, push, and create a pull request.
  9. Tag soldni to review the PR.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

smashed-0.1.4.2.tar.gz (22.0 kB)

Uploaded Source

Built Distribution

smashed-0.1.4.2-py3-none-any.whl (25.2 kB)

Uploaded Python 3

File details

Details for the file smashed-0.1.4.2.tar.gz.

File metadata

  • Download URL: smashed-0.1.4.2.tar.gz
  • Size: 22.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.13

File hashes

Hashes for smashed-0.1.4.2.tar.gz
  • SHA256: 07a39b01d3a2279a9525c9b1b4414b99e0254d7a463928756544d18c5beda70f
  • MD5: 654bc97d90da2ba54b949846286f10d0
  • BLAKE2b-256: bbb2eb265693f0a1740919a31f6014037543eaba563d26ec8ed15478b3d31aa7


File details

Details for the file smashed-0.1.4.2-py3-none-any.whl.

File metadata

  • Download URL: smashed-0.1.4.2-py3-none-any.whl
  • Size: 25.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.13

File hashes

Hashes for smashed-0.1.4.2-py3-none-any.whl
  • SHA256: 4cd50af6a31fe2deefa4a477a0f5385a8c1844c60f49c82d3cb15c3c60e6e99b
  • MD5: 711b6268ec35c9a2fb340a5f9ee80564
  • BLAKE2b-256: 70ab0de8981d3d04dbc3e65489d0b24224c38b35b6bd89de43789af808553d40


Supported by

AWS AWS Cloud computing and Security Sponsor Datadog Datadog Monitoring Fastly Fastly CDN Google Google Download Analytics Microsoft Microsoft PSF Sponsor Pingdom Pingdom Monitoring Sentry Sentry Error logging StatusPage StatusPage Status page