
A library for detecting verbatim-duplicated content within large collections of documents


D3lta

If you like our project, please give us a star ⭐ on GitHub to follow the latest updates.


This repository is the official implementation of D3lta, a library for detecting verbatim-duplicated content within large collections of documents.

It distinguishes three types of duplicated content: copypasta (near-exact duplicates), rewording, and translation. It runs on CPU.


💻 Installing

Clone the repository

git clone https://github.com/VIGINUM-FR/D3lta

Navigate to the project

cd D3lta

Install the package

pip install -e .
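
Optionally, check that the package is importable

python -c "import d3lta"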

🚀 Quick start

You can use the semantic_faiss function directly on a DataFrame that contains texts. By default, embeddings are computed with the Universal Sentence Encoder, but other models can be used to compute the embeddings.

import pandas as pd
from d3lta.faissd3lta import *

examples_dataset = [
    "Je m'apelle Mimie et je fais du stop",
    "Je m'apelle Giselle et toi ?",
    "Les chats sont gris",
    "Cat's are grey, aren't they ?",
    "Cats are grey",
    "Les chats ne sont pas gris",
]
df = pd.DataFrame(examples_dataset, columns=["text_language_detect"])
df.index = df.index.astype(str)

matches, df_clusters = semantic_faiss(
    df=df.rename(columns={"text_language_detect": "original"}),
    min_size_txt=10,
    embeddings_to_save='myembeddings',
    threshold_grapheme=0.693,
    threshold_language=0.715,
    threshold_semantic=0.85,
)

>>> matches

  source target     score duplicates language_source           text_to_embed_source  text_grapheme_source language_target           text_to_embed_target   text_grapheme_target     dup_type  score_lev
0      2      3  0.745741        2-3              fr            Les chats sont gris      leschatssontgris              en  Cat's are grey, aren't they ?   catsaregreyarentthey  translation        NaN
1      2      4  0.955517        2-4              fr            Les chats sont gris      leschatssontgris              en                  Cats are grey            catsaregrey  translation        NaN
2      2      5  0.808805        2-5              fr            Les chats sont gris      leschatssontgris              fr     Les chats ne sont pas gris  leschatsnesontpasgris   copy-pasta   0.761905
5      3      5  0.833525        3-5              en  Cat's are grey, aren't they ?  catsaregreyarentthey              fr     Les chats ne sont pas gris  leschatsnesontpasgris  translation        NaN
8      4      5  0.767601        4-5              en                  Cats are grey           catsaregrey              fr     Les chats ne sont pas gris  leschatsnesontpasgris  translation        NaN

>>> df_clusters
                               original language                 text_grapheme                         text_to_embed                  text_language_detect  cluster
0  Je m'apelle Mimie et je fais du stop       fr  jemapellemimieetjefaisdustop  Je m'apelle Mimie et je fais du stop  Je m'apelle Mimie et je fais du stop      NaN
1          Je m'apelle Giselle et toi ?       fr         jemapellegiselleettoi          Je m'apelle Giselle et toi ?          Je m'apelle Giselle et toi ?      NaN
2                   Les chats sont gris       fr              leschatssontgris                   Les chats sont gris                   Les chats sont gris      0.0
3         Cat's are grey, aren't they ?       en          catsaregreyarentthey         Cat's are grey, aren't they ?         Cat's are grey, aren't they ?      0.0
4                         Cats are grey       en                   catsaregrey                         Cats are grey                         Cats are grey      0.0
5            Les chats ne sont pas gris       fr         leschatsnesontpasgris            Les chats ne sont pas gris            Les chats ne sont pas gris      0.0
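
The matches table is a regular pandas DataFrame, so detected pairs can be filtered with standard operations. For instance, using only columns shown in the output above:

# keep only pairs detected as translations
translations = matches[matches["dup_type"] == "translation"]

# restrict to high-confidence pairs, whatever their type
strong_pairs = matches[matches["score"] > 0.9][["source", "target", "dup_type", "score"]]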

It's also possible to use Faiss directly to find similar embeddings.

import pandas as pd
from d3lta.faissd3lta import *

examples_dataset = [
    "Je m'apelle Mimie et je fais du stop",
    "Je m'apelle Giselle et toi ?",
    "Les chats sont gris",
    "Cat's are grey, aren't they ?",
    "Cats are grey",
    "Les chats ne sont pas gris",
]

df_test = pd.DataFrame(
    examples_dataset,
    columns=["text_to_embed"],
    index=range(len(examples_dataset)),
)  # explicit index so search results can be mapped back to the original rows
df_emb = compute_embeddings(df_test)
index_t = create_index_cosine(df_emb)

test_dataset = pd.DataFrame([{"text_to_embed": "I gatti sono grigi"}])
df_emb_test = compute_embeddings(test_dataset)

limits, distances, indices = index_t.range_search(
    x=df_emb_test.to_numpy().reshape(1, -1), thresh=0.7
)

>>> df_test.loc[indices, "text_to_embed"]

2              Les chats sont gris
3    Cat's are grey, aren't they ?
4                    Cats are grey
5       Les chats ne sont pas gris
Name: text_to_embed, dtype: object
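
Faiss also returns the similarity of each hit in distances, aligned with indices (for a single query, limits can be ignored), so scores can be displayed next to the matched texts. A small sketch reusing the variables above:

for score, idx in zip(distances, indices):
    print(f"{score:.3f}  {df_test.loc[idx, 'text_to_embed']}")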

It is also possible to use your own embeddings (other than the Universal Sentence Encoder). For example:

import pandas as pd
from sentence_transformers import SentenceTransformer
from d3lta.faissd3lta import *

examples_dataset = [
    "Je m'apelle Mimie et je fais du stop",
    "Je m'apelle Giselle et toi ?",
    "Les chats sont gris",
    "Cat's are grey, aren't they ?",
    "Cats are grey",
    "Les chats ne sont pas gris",
]
df = pd.DataFrame(examples_dataset, columns=["text_language_detect"])
df.index = df.index.astype(str)

model = SentenceTransformer('paraphrase-multilingual-mpnet-base-v2')
new_emb = model.encode(df['text_language_detect'].values.tolist())
df_emb = pd.DataFrame(new_emb, index=df.index)

matches, df_clusters = semantic_faiss(
    df=df.rename(columns={"text_language_detect": "original"}),
    min_size_txt=10,
    df_embeddings_use=df_emb,
    threshold_grapheme=0.693,
    threshold_language=0.715,
    threshold_semantic=0.85,
)

>>> matches
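
Computing embeddings is usually the most expensive step, so for large corpora it can be worth persisting df_emb and reusing it across runs. A minimal sketch with plain pandas (the file name is arbitrary):

# save the embeddings once...
df_emb.to_pickle("my_embeddings.pkl")

# ...and reload them later instead of re-encoding the corpus
df_emb = pd.read_pickle("my_embeddings.pkl")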

📚 Synthetic dataset

The dataset is available in release 1.0.0. It contains the following files:

synthetic_dataset_documents.csv:

This file contains all seeds (real, original texts) and their generated variations (copy-pasta, rewording, or translation). There are 2,985 documents in this corpus; the variations were generated using a large language model.

Column details:

  • doc_id (int): unique number associated with each text. Seed doc_ids are multiples of 10, and each seed is followed by its 9 transformations.
  • original (str): real or transformed text
  • text_type (str): dataset from which the seed was extracted (books, news, tweets)
  • language (str): language of the text
  • prompt (str): prompt given to ChatGPT for "copypasta" and "rewording"
  • seed (bool): True if the text is one of the 300 initial texts from which the variations were generated (see the loading sketch below)

The 300 initial texts (seeds) were taken from three Kaggle datasets (for more details, please refer to the paper).
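
As an illustration, assuming synthetic_dataset_documents.csv has been downloaded into the working directory, the seed flag separates the 300 original texts from their variations:

import pandas as pd

docs = pd.read_csv("synthetic_dataset_documents.csv")
seeds = docs[docs["seed"]]         # the 300 original texts
variations = docs[~docs["seed"]]   # their generated transformations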

synthetic_dataset_pairs_unbalanced.csv:

This file contains the 1,497,547 annotated text pairs of the synthetic dataset: 4,500 pairs of translation, 4,030 pairs of copy-pasta, 4,017 pairs of rewording, and 1,485,000 pairs of non-duplicated content called "nomatch". A minimal loading sketch follows the column list.

Column details:

  • source_target (str): unique id for the pair.
  • source (int): index of the first text of the pair in synthetic_dataset_documents.csv
  • target (int): index of the second text of the pair in synthetic_dataset_documents.csv
  • original_source (str): text of the source index
  • original_target (str): text of the target index
  • language_source (str): language of original_source
  • language_target (str): language of original_target
  • true_label (str): transformation relation linking the two texts of the pair, i.e. the source and target texts are {true_label} of each other. The true_label can be "copypasta", "rewording" or "translation"
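
A minimal loading sketch, assuming the file has been downloaded into the working directory; the class counts should match the figures given above:

import pandas as pd

pairs = pd.read_csv("synthetic_dataset_pairs_unbalanced.csv")
print(pairs["true_label"].value_counts())
# nomatch        1485000
# translation       4500
# copypasta         4030
# rewording         4017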

Notebooks

In the notebooks folder, you can find:

Citation

If you find our paper and code useful in your research, please consider giving a star 🌟 and a citation 📝:

@misc{richard2023unmasking,
      title={Unmasking information manipulation: A quantitative approach to detecting Copy-pasta, Rewording, and Translation on Social Media}, 
      author={Manon Richard and Lisa Giordani and Cristian Brokate and Jean Liénard},
      year={2023},
      eprint={2312.17338},
      archivePrefix={arXiv},
      primaryClass={cs.SI},
      url={https://arxiv.org/abs/2312.17338}, 
}
