
An ultralight and fast wrapper around an independent implementation of SPLADE++ models for your search & retrieval pipelines. Models and library created by Prithivi Da. For PRs and collaboration, check out the README.

Project description

SPLADERunner

1. What is it?

The title is dedicated to the original Blade Runners: Harrison Ford and Philip K. Dick, the author of "Do Androids Dream of Electric Sheep?"

An ultra-lite & super-fast Python wrapper around an independent implementation of SPLADE++ models for your search & retrieval pipelines. Based on the papers: Naver's "From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective" and Google's "SparseEmbed".

  • Lightweight:
    • No Torch or Transformers needed.
    • Runs on CPU for query or passage expansion.
    • FLOPS & retrieval efficient: refer to the model card for details.

🚀 Installation:

pip install spladerunner

Usage:

# One-time init: pass the model name and max_seq_len
from spladerunner import Expander
expander = Expander('Splade_PP_en_v1', 128)

# Sample passage expansion (default output format)
sparse_rep = expander.expand("The Manhattan Project and its atomic bomb helped bring an end to World War II. Its legacy of peaceful uses of atomic energy continues to have an impact on history and science.")

# For Solr, Elasticsearch, or vanilla Lucene stores
sparse_rep = expander.expand("The Manhattan Project and its atomic bomb helped bring an end to World War II. Its legacy of peaceful uses of atomic energy continues to have an impact on history and science.", outformat="lucene")

print(sparse_rep)
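If you index into Elasticsearch/OpenSearch, a minimal sketch like the one below can turn an expansion into a rank_features document and a weighted rank_feature query. It assumes the "lucene" output is (or can be converted to) a plain term-to-weight dict; the field names and the fake_rep values here are purely illustrative and are not part of SPLADERunner.

# Minimal sketch, assuming `sparse_rep` looks like {"atomic": 1.8, "bomb": 1.4, ...}.
# The index/field names below are illustrative, not part of SPLADERunner.

def to_rank_features_doc(sparse_rep, text):
    """Document body for an index with a `rank_features` field named `splade_terms`."""
    return {"text": text, "splade_terms": sparse_rep}

def to_rank_feature_query(sparse_rep, top_k=50):
    """Query body: one `rank_feature` clause per expanded term, boosted by its weight."""
    clauses = [
        {"rank_feature": {"field": f"splade_terms.{term}", "boost": weight}}
        for term, weight in sorted(sparse_rep.items(), key=lambda kv: -kv[1])[:top_k]
    ]
    return {"query": {"bool": {"should": clauses}}}

if __name__ == "__main__":
    fake_rep = {"atomic": 1.8, "bomb": 1.4, "manhattan": 1.2, "war": 0.9}  # illustrative only
    print(to_rank_feature_query(fake_rep, top_k=3))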

(Feel free to skip to Section 3 if you are an expert in sparse and dense representations.)

2. Why Sparse Representations?

  • Lexical search with BOW-based sparse vectors is a strong baseline, but it famously suffers from the vocabulary mismatch problem, since it can only do exact term matching (see the toy sketch after this list).

Pros

✅ Efficient and Cheap.
✅ No need to fine-tune models.
✅️ Interpretable.
✅️ Exact Term Matches.

Cons

❌ Vocabulary mismatch (you need to remember the exact terms).
  • Semantic search with learned neural / dense retrievers and approximate nearest neighbors search has shown impressive results, but it has its own trade-offs:

Pros

✅ Searches the way humans innately think.
✅ When fine-tuned, beats sparse retrieval by a long way.
✅ Easily extends to multiple modalities.

Cons

❌ Suffers from token amnesia (misses exact term matches).
❌ Resource intensive (both indexing & retrieval).
❌ Famously hard to interpret.
❌ Needs fine-tuning for OOD data.
  • Getting the pros of both kinds of search made sense, and that gave rise to interest in learning sparse representations for queries and documents with some interpretability. The sparse representations also double as implicit or explicit (latent, contextualized) expansion mechanisms for both queries and documents. If you are new to query expansion, learn more here from Daniel Tunkelang.
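As a toy illustration of the vocabulary mismatch problem mentioned above (not part of SPLADERunner), exact-term matching fails as soon as the query uses a synonym the document never contains:

# Toy illustration of vocabulary mismatch with exact-term (BOW) matching.
docs = {1: "cheap hotels in new york city", 2: "affordable accommodation in nyc"}

def exact_term_match(query, docs):
    """Return ids of docs sharing at least one exact term with the query."""
    q_terms = set(query.lower().split())
    return [doc_id for doc_id, text in docs.items() if q_terms & set(text.split())]

print(exact_term_match("budget lodging nyc", docs))  # [2] -- doc 1 is missed entirely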

2a. What do the models learn?

  • The model learns to project its dense (contextual) token representations through an MLM head to give a distribution over the vocabulary; a log-saturated activation and pooling over tokens then yield a sparse term-weight vector (see the sketch below).
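A minimal numpy sketch of that idea, under the standard SPLADE formulation w_j = max_i log(1 + ReLU(logit_ij)). The shapes and values are made up for illustration; the real model uses a transformer encoder plus its MLM head.

import numpy as np

def splade_pool(mlm_logits):
    """mlm_logits: (num_tokens, vocab_size) MLM-head logits for one passage.
    Returns a (vocab_size,) term-weight vector: w_j = max_i log(1 + relu(logit_ij))."""
    saturated = np.log1p(np.maximum(mlm_logits, 0.0))  # log(1 + ReLU(.)) per token/term
    return saturated.max(axis=0)                       # max-pool over the token axis

# Illustrative toy input: 4 tokens, vocabulary of 10 terms.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 10))
weights = splade_pool(logits)
nonzero = {j: round(float(w), 3) for j, w in enumerate(weights) if w > 0}
print(nonzero)  # term-id -> weight; in the trained model, FLOPS regularization keeps most weights at zero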

3. 💸 Why SPLADERunner?

  • $-conscious: serverless deployments like Lambda are charged by memory & time per invocation.
  • Smaller package size = shorter cold-start times and quicker re-deployments for serverless.

4. 🎯 Models:

4a. 💸 Where and how can you use it?

  • [TBD]

4b. How (and what) to contribute?

  • [TBD]

5. Criticisms of, and competition to, SPLADE and learned sparse representations:

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

SPLADERunner-0.1.6.tar.gz (5.9 kB)

Uploaded Source

Built Distribution

SPLADERunner-0.1.6-py3-none-any.whl (6.2 kB)

Uploaded Python 3

File details

Details for the file SPLADERunner-0.1.6.tar.gz.

File metadata

  • Download URL: SPLADERunner-0.1.6.tar.gz
  • Upload date:
  • Size: 5.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.4

File hashes

Hashes for SPLADERunner-0.1.6.tar.gz
Algorithm Hash digest
SHA256 c81d9b743d18fa2a1349658a751708d140e1d113da2f9e62bb901008a304f554
MD5 2a5233eacfaed1b2519eaf575de3cbe3
BLAKE2b-256 2239cfc0c3f3319ee45e96d87c36ef1bd0ae5aa17aa5000628669b4aed72b30e


File details

Details for the file SPLADERunner-0.1.6-py3-none-any.whl.

File metadata

File hashes

Hashes for SPLADERunner-0.1.6-py3-none-any.whl
Algorithm Hash digest
SHA256 826517c3502cbffd6a9591982ae81e94fda43d2a35d5d44e65b0306e40d1e4d3
MD5 fce65d1e5194117550f0bb70a679531c
BLAKE2b-256 0c8760ce4155e88239f6b6bfc401a1f2cf8cd4c646e5c3c696f00bb0d341a76a

