
A system for quickly generating training data with multi-task weak supervision

Project description

Snorkel MeTaL


v0.5.0

ANNOUNCEMENT (3/20): We are excited to have achieved a new state-of-the-art score on the GLUE Benchmark and four of its component tasks using Snorkel MeTaL. Check out the corresponding blog post for an overview of how we did it. The code we used to accomplish this was part of a significant restructuring of multi-task end models in Snorkel MeTaL, intended to make it as easy as possible to perform Massive Multi-Task Learning (MMTL) with supervision at varying levels of granularity and over an arbitrarily large number of tasks. The resulting mmtl package has been released as part of Snorkel MeTaL v0.5, along with a basic tutorial. Additional tutorials showing more advanced usage (e.g., using a pre-trained BERT network as a shared input module, using multiple label sets, supervising at the token and sentence level simultaneously, etc.) will be released in future minor version updates, though such functionality is already supported.

Stay tuned for other developments in the Snorkel ecosystem at our project landing page: snorkel.stanford.edu.

Getting Started

Motivation

This project builds on Snorkel in an attempt to understand how massively multi-task supervision and learning changes the way people program. Multitask learning (MTL) is an established technique that effectively pools samples by sharing representations across related tasks, leading to better performance with less training data (for a great primer on recent advances, see this survey). However, most existing multi-task systems rely on two or three fixed, hand-labeled training sets. Instead, weak supervision opens the floodgates, allowing users to add arbitrarily many weakly-supervised tasks. We call this setting massively multitask learning, and envision models with tens or hundreds of tasks with supervision of widely varying quality. Our goal with the Snorkel MeTaL project is to understand this new regime, and the programming model it entails.

More concretely, Snorkel MeTaL is a framework for using multi-task weak supervision (MTS), provided by users in the form of labeling functions applied over unlabeled data, to train multi-task models. Snorkel MeTaL can use the output of labeling functions developed and executed in Snorkel, or take in arbitrary label matrices representing weak supervision from multiple sources of unknown quality, and then use this to train auto-compiled MTL networks.

Snorkel MeTaL uses a new matrix approximation approach to learn the accuracies of diverse sources of unknown quality, handling arbitrary dependency structures and structured multi-task outputs. This makes it significantly more scalable than our previous approaches.
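
For concreteness, here is a minimal sketch of what such a label matrix looks like, assuming MeTaL's convention of categorical labels in {1, ..., k} with 0 reserved for abstains; the data and votes below are made up purely for illustration:

import numpy as np
from scipy.sparse import csr_matrix

# Suppose n=4 data points, m=3 labeling functions, and a binary task (k=2).
# Entry L[i, j] is labeling function j's vote on data point i:
# a class label in {1, 2}, or 0 if that function abstains.
votes = np.array([
    [1, 0, 1],
    [2, 2, 0],
    [0, 1, 1],
    [2, 0, 2],
])
L_train = csr_matrix(votes)  # sparse [n, m] label matrix, as used in the sample usage below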


Q&A

If you are looking for help regarding how to use a particular class or method, the best references are (in order):

  • The docstrings for that class
  • The MeTaL Commandments
  • The corresponding unit tests in tests/
  • The Issues page (we tag issues that may be particularly helpful with the "reference question" label)

Sample Usage

This sample is for a single-task problem. For a multi-task example, see tutorials/Multitask.ipynb.

"""
n = # data points
m = # labeling functions
k = cardinality of the classification task

Load for each split: 
L: an [n,m] scipy.sparse label matrix of noisy labels
Y: an n-dim numpy.ndarray of target labels
X: an n-dim iterable (e.g., a list) of end model inputs
"""

from metal.label_model import LabelModel
from metal.end_model import EndModel

# Train a label model and generate training labels
label_model = LabelModel(k)
label_model.train_model(L_train)
Y_train_probs = label_model.predict_proba(L_train)

# Train a discriminative end model with the generated labels
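# The list gives layer dimensions: a 1000-dim input, one hidden layer of 10 units, and k=2 output classes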
end_model = EndModel([1000,10,2])
end_model.train_model(train_data=(X_train, Y_train_probs), valid_data=(X_dev, Y_dev))

# Evaluate performance
score = end_model.score(data=(X_test, Y_test), metric="accuracy")

Note for Snorkel users: even in the single-task case, Snorkel MeTaL learns a slightly different label model than Snorkel does (e.g., here we learn class-conditional accuracies for each LF), so expect slightly different (hopefully better!) results.
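
To make the interface concrete: predict_proba returns an [n, k] array of probabilistic (soft) labels, one distribution over the k classes per data point, and the end model consumes these directly as training targets. A quick sanity check (a sketch, reusing the variables from the sample above):

import numpy as np

# Each row of Y_train_probs is a probability distribution over the k classes.
assert Y_train_probs.shape == (L_train.shape[0], k)
assert np.allclose(Y_train_probs.sum(axis=1), 1.0)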

Release Notes

Major changes in v0.5:

  • Introduction of Massive Multi-Task Learning (MMTL) package in metal/mmtl/ with tutorial.
  • Additional logging improvements from v0.4

Major changes in v0.4:

  • Upgrade to pytorch v1.0
  • Improved control over logging/checkpointing/validation
    • More modular code, separate Logger, Checkpointer, LogWriter classes
    • Support for user-defined metrics for validation/checkpointing
    • Logging frequency can now be based on seconds, examples, batches, or epochs
  • Naming convention change: hard (int) labels -> preds, soft (float) labels -> probs (see the short example below)
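
For example, the preds/probs naming maps onto the two prediction methods of a trained model (a sketch, assuming the end_model and test data from the sample usage above):

Y_test_probs = end_model.predict_proba(X_test)  # soft (float) labels: [n, k] class probabilities
Y_test_preds = end_model.predict(X_test)        # hard (int) labels in {1, ..., k}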

Setup

[1] Install anaconda:
Instructions here: https://www.anaconda.com/download/

[2] Clone the repository:

git clone https://github.com/HazyResearch/metal.git
cd metal

[3] Create virtual environment:

conda env create -f environment.yml
source activate metal

[4] Run unit tests:

nosetests

If the tests run successfully, you should see 50+ dots followed by "OK".
Check out the tutorials to get familiar with the Snorkel MeTaL codebase!

Or, to use Snorkel MeTaL in another project, install it with pip:

pip install snorkel-metal

Developer Guidelines

First, read the MeTaL Commandments, which describe the major design principles, terminology, and style guidelines for Snorkel MeTaL.

If you are interested in contributing to Snorkel MeTaL (and we wholeheartedly welcome contributions via pull requests!), follow the setup guidelines above, then run the following additional command:

make dev

This will install a few additional tools that help to ensure that any commits or pull requests you submit conform with our established standards. We use the following packages:

  • isort: import standardization
  • black: automatic code formatting
  • flake8: PEP8 linting

After running make dev to install the necessary tools, you can run make check to see whether any changes you've made violate the repo standards, and make fix to automatically fix any violations related to isort or black. Fixes for flake8 violations will need to be made manually.

GPU Usage

MeTaL supports GPU usage, but the GPU tests are not part of the automatically-run test suite; to run them, first install the requirements in tests/gpu/requirements.txt, then run:

nosetests tests/gpu
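
Before running them, it can be worth confirming that PyTorch can actually see a GPU; this quick check uses plain PyTorch and nothing MeTaL-specific:

import torch

if torch.cuda.is_available():
    print("CUDA device found:", torch.cuda.get_device_name(0))
else:
    print("No CUDA device visible; the GPU tests will not be able to use a GPU on this machine.")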

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

snorkel-metal-0.5.0.tar.gz (104.7 kB)

Uploaded Source

Built Distribution

snorkel_metal-0.5.0-py3-none-any.whl (133.6 kB)

Uploaded Python 3

File details

Details for the file snorkel-metal-0.5.0.tar.gz.

File metadata

  • Download URL: snorkel-metal-0.5.0.tar.gz
  • Upload date:
  • Size: 104.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.21.0 setuptools/41.0.0 requests-toolbelt/0.9.1 tqdm/4.31.1 CPython/3.6.7

File hashes

Hashes for snorkel-metal-0.5.0.tar.gz:

  • SHA256: f8ebac88e2417b228a4a30d3456f2dc24a16a0cd15c957364863a300ee153e78
  • MD5: 36c440fcc1ff2a7a129b09694f2b0add
  • BLAKE2b-256: ce37b7e8488e6b3ec6687cb4d432f14e5ffa99d1fab33405853454c2c545e0b9


File details

Details for the file snorkel_metal-0.5.0-py3-none-any.whl.

File metadata

  • Download URL: snorkel_metal-0.5.0-py3-none-any.whl
  • Upload date:
  • Size: 133.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.21.0 setuptools/41.0.0 requests-toolbelt/0.9.1 tqdm/4.31.1 CPython/3.6.7

File hashes

Hashes for snorkel_metal-0.5.0-py3-none-any.whl:

  • SHA256: f994bd74693fefaf604ba088dfdbb9acb1b545d26f9cc8b1ff2f4e7519de935b
  • MD5: df0783d4947963249a73aa6047e29021
  • BLAKE2b-256: 1cf48fdcdb895eb74cf417503b1713138f4667656ef01b40bed7e30cbebee88e

