🔌 Open-source plugins with practical features for Argilla, built using listeners.
Argilla Plugins
🔌 Open-source plugins for extra features and workflows
Why? The design of Argilla is intentionally programmable (i.e., developers can build complex workflows for reading and updating datasets). However, certain workflows and features are shared across different use cases and could be simplified from a developer-experience perspective. To facilitate the reuse of key workflows and empower the community, Argilla Plugins provides a collection of extensions to supercharge your Argilla use cases. Some of these pluggable methods could eventually be integrated into the core of Argilla.
Quickstart
pip install argilla-plugins
from argilla_plugins.datasets import end_of_life
plugin = end_of_life(
name="plugin-test",
end_of_life_in_seconds=100,
execution_interval_in_seconds=5,
discard_only=False
)
plugin.start()
How to develop a plugin
- Pick a cool plugin from the list of topics or our issue overview.
- Think about an abstraction for the plugin as shown below.
- Refer to the solution in the issue.
- Fork the repo.
- Commit your code.
- Open a PR.
- Keep it simple.
- Have fun.
Development requirements
Function
We want to keep the plugins as abstract as possible; hence, each plugin has to be usable within 3 lines of code.
from argilla_plugins.topic import plugin
plugin(name="dataset_name", ws="workspace", query="query", interval=1.0)
plugin.start()
Variables
The variables name, ws, and query are meant to be re-used as consistently as possible across all plugins. Similarly, some functions might contain variations like name_from or query_from. Whenever possible, re-use these shared variable names.
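To make this concrete, here is a minimal, hypothetical sketch of a plugin factory that follows these naming conventions. It is not part of the package, and the listener-based wiring (and the use of rg.set_workspace) is an assumption about how such plugins are typically built.
import argilla as rg
from argilla.listeners import listener

def plugin(name: str, ws: str = None, query: str = None, interval: float = 1.0):
    # Hypothetical factory: returns a listener that polls `name` every `interval` seconds.
    if ws is not None:
        rg.set_workspace(ws)  # assumption: the workspace is set client-wide
    @listener(
        dataset=name,
        query=query,
        execution_interval_in_seconds=interval,
    )
    def _job(records, ctx):
        # Operate on the fetched records here, e.g. update them and log them back.
        rg.log(records, name=name)
    return _job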
Ohh, and don't forget to have fun! 🤓
Topics
Reporting
What is it? Create interactive reports about dataset activity, dataset features, annotation tasks, model predictions, and more.
Plugins:
- automated reporting plugin using datapane (a rough sketch of the idea follows this list). issue
- automated reporting plugin for great-expectations. issue
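Neither reporting plugin exists yet. Purely as a hedged illustration of what such a plugin might compute, the sketch below summarises a dataset with pandas only; the column names assume a text classification dataset, and rendering with datapane or validating with great-expectations is left to the linked issues.
import argilla as rg
import pandas as pd

def dataset_report(name: str) -> pd.DataFrame:
    # Load the dataset and summarise annotation activity.
    df = rg.load(name).to_pandas()  # column names below assume a text classification dataset
    return pd.DataFrame({
        "total_records": [len(df)],
        "annotated": [df["annotation"].notna().sum()],
        "predicted": [df["prediction"].notna().sum()],
    })

print(dataset_report("plugin-test"))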
Datasets
What is it? Everything that involves operations at the dataset level, like dividing work, syncing datasets, and deduplicating records.
Plugins:
- sync data between datasets.
- remove duplicate records (see the sketch after this list). issue
- create train/test splits. issue
- set limits on the number of records in a dataset.
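The deduplication plugin is not implemented yet. As a hedged sketch of the idea using the standard Argilla client, one could keep the first occurrence of each text and delete or discard the rest; the `text` field and the rg.delete_records call are assumptions about your dataset type and client version.
import argilla as rg

def remove_duplicate_records(name: str, discard_only: bool = True):
    # Keep the first record per text; collect the ids of later duplicates.
    seen, duplicate_ids = set(), []
    for record in rg.load(name):
        key = record.text  # assumes records expose a `text` field
        if key in seen:
            duplicate_ids.append(record.id)
        else:
            seen.add(key)
    if duplicate_ids:
        rg.delete_records(name=name, ids=duplicate_ids, discard_only=discard_only)

remove_duplicate_records("plugin-test")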
End of Life
Automatically delete or discard records after x seconds.
from argilla_plugins.datasets import end_of_life
plugin = end_of_life(
name="plugin-test",
end_of_life_in_seconds=100,
execution_interval_in_seconds=5,
discard_only=False
)
plugin.start()
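For contributors, here is a simplified, hypothetical sketch of how a plugin like this can be built on Argilla listeners; the event_timestamp-based age check is an assumption for illustration, not the actual implementation.
import datetime
import argilla as rg
from argilla.listeners import listener

def end_of_life_sketch(name, end_of_life_in_seconds=100, execution_interval_in_seconds=5, discard_only=False):
    @listener(dataset=name, execution_interval_in_seconds=execution_interval_in_seconds)
    def _job(records, ctx):
        # Delete (or discard) records older than the configured lifetime.
        cutoff = datetime.datetime.utcnow() - datetime.timedelta(seconds=end_of_life_in_seconds)
        expired = [r.id for r in records if r.event_timestamp and r.event_timestamp < cutoff]
        if expired:
            rg.delete_records(name=name, ids=expired, discard_only=discard_only)
    return _job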
Programmatic Labelling
What is it? Automatically update the annotation and prediction labels of records based on heuristics.
Plugins:
- annotated spans as gazetteer for labelling. issue
- vector search queries and similarity threshold. issue
- use gazetteer for labelling. issue
- materialize annotations/predictions from rules using Snorkel or a MajorityVoter. issue
Token Copycat
If we annotate spans for a task like NER, we can be relatively certain that these spans should be annotated the same way throughout the entire dataset. We can use this assumption to start annotating or predicting previously unseen data.
from argilla_plugins import token_copycat
plugin = token_copycat(
name="plugin-test",
query=None,
copy_predictions=True,
word_dict_kb_predictions={"key": {"label": "label", "score": 0}},
copy_annotations=True,
word_dict_kb_annotations={"key": {"label": "label", "score": 0}},
included_labels=["label"],
case_sensitive=True,
execution_interval_in_seconds=1,
)
plugin.start()
Active learning
What is it? A process during which a learning algorithm can interactively query a user (or some other information source) to label new data points.
Plugins:
- active learning for TextClassification.
- active learning for TokenClassification. issue
from argilla_plugins import classy_learner
plugin = classy_learner(
name="plugin-test",
query=None,
model="all-MiniLM-L6-v2",
classy_config=None,
certainty_threshold=0,
overwrite_predictions=True,
sample_strategy="fifo",
min_n_samples=6,
max_n_samples=20,
batch_size=1000,
execution_interval_in_seconds=5,
)
plugin.start()
Inference endpoints
What is it? Automatically add predictions to records as they are logged into Argilla. This makes it really easy to pre-annotate a dataset with an existing model or service.
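No inference-endpoint plugin ships yet. As a hedged sketch of the idea, an Argilla listener could periodically fetch records, add predictions from any model callable, and log them back; predict_fn, the dataset name, and the `text`/`prediction` fields below are placeholders and assumptions about a text classification dataset.
import argilla as rg
from argilla.listeners import listener

def predict_fn(text):
    # Placeholder: replace with a call to your model or inference endpoint.
    return [("positive", 0.5)]

@listener(
    dataset="plugin-test",
    execution_interval_in_seconds=10,
)
def add_predictions(records, ctx):
    # Only touch records that have no prediction yet.
    updated = []
    for record in records:
        if record.prediction is None:
            record.prediction = predict_fn(record.text)
            updated.append(record)
    if updated:
        rg.log(updated, name="plugin-test")

add_predictions.start()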
Training endpoints
What is it? Automatically train a model based on dataset annotations.
- TBD
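Nothing is implemented for training endpoints yet. As a rough sketch of the idea, one could load the validated annotations and fit a lightweight scikit-learn baseline; the query string and column names are assumptions about a single-text classification dataset.
import argilla as rg
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_from_annotations(name: str):
    # Keep only records that a human has validated.
    df = rg.load(name, query="status:Validated").to_pandas()
    texts = df["text"].tolist()        # assumes a single-text classification dataset
    labels = df["annotation"].tolist()
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(texts, labels)
    return model

model = train_from_annotations("plugin-test")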
Suggestions
Do you have any suggestions? Please open an issue 🤓
File details
Details for the file argilla-plugins-0.1.3.tar.gz.
File metadata
- Download URL: argilla-plugins-0.1.3.tar.gz
- Upload date:
- Size: 16.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.2.1 CPython/3.10.9 Darwin/22.3.0
File hashes
Algorithm | Hash digest
---|---
SHA256 | a42cfe726fcddc8eeba7ad8e689e84de1757dfb97e110a47f53b5d1a08b88686
MD5 | d975e51f9e13a80909f746fb121c3b53
BLAKE2b-256 | 631e6061fa288868461ae9ec633992b4beae3fecd6dbe1d696d756a3756525d1
File details
Details for the file argilla_plugins-0.1.3-py3-none-any.whl.
File metadata
- Download URL: argilla_plugins-0.1.3-py3-none-any.whl
- Upload date:
- Size: 18.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.2.1 CPython/3.10.9 Darwin/22.3.0
File hashes
Algorithm | Hash digest
---|---
SHA256 | fcb7582961726ce5077bc2de62ab8be15ad1e3b392828b25ac189500e694bbac
MD5 | c2f9a3d23015d6b248fae6b0582a54c5
BLAKE2b-256 | 39583737aae5a886bd5b7c20a45a4f04d60217c23f7d54102847f71e6f1e33a2