
A high-performance data structure for fast retrieval over learned sparse representations

Project description

Seismic

Seismic is a highly efficient data structure, written in Rust 🦀, for fast retrieval over learned sparse embeddings. Designed with scalability and performance in mind, Seismic makes querying learned sparse representations seamless.

Details on how to use Seismic's core engine in Rust 🦀 can be found in docs/RustUsage.md.

The instructions below explain how to use Seismic via its Python API.

⚡ Installation

The easiest way to use Seismic is via its Python API, which can be installed in two ways:

  1. via pip:
pip install pyseismic-lsr
  2. via Rust compilation, which enables deeper hardware optimizations:
RUSTFLAGS="-C target-cpu=native" pip install --no-binary :all: pyseismic-lsr

Check docs/PythonUsage.md for more details.

🚀 Quick Start

Given a collection as a jsonl file, you can quickly index it by running

from seismic import SeismicIndex

json_input_file = "" # Path to your jsonl data collection

index = SeismicIndex.build(json_input_file)
print("Number of documents:", index.len)
print("Avg number of non-zero components:", index.nnz / index.len)
print("Dimensionality of the vectors:", index.dim)

index.print_space_usage_byte()

and then use Seismic to quickly retrieve results for your queries

import numpy as np

MAX_TOKEN_LEN = 30

string_type = f'U{MAX_TOKEN_LEN}'

query = {"a": 3.5, "certain": 3.5, "query": 0.4}
query_id = "0"
query_components = np.array(list(query.keys()), dtype=string_type)
query_values = np.array(list(query.values()), dtype=np.float32)

results = index.search(
    query_id=query_id,
    query_components=query_components,
    query_values=query_values,
    k=10, 
    query_cut=3, 
    heap_factor=0.8,
)
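When you have many queries to run, the same dict-to-arrays conversion can be wrapped in a small helper. The function name `to_seismic_arrays` below is hypothetical, not part of Seismic's API; it only reproduces the conversion shown above:

```python
import numpy as np

MAX_TOKEN_LEN = 30
STRING_TYPE = f"U{MAX_TOKEN_LEN}"  # fixed-width unicode dtype, as above

def to_seismic_arrays(query):
    """Split a {token: weight} dict into the parallel arrays that search expects."""
    components = np.array(list(query.keys()), dtype=STRING_TYPE)
    values = np.array(list(query.values()), dtype=np.float32)
    return components, values

components, values = to_seismic_arrays({"a": 3.5, "certain": 3.5, "query": 0.4})
```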

📥 Download the Datasets

The embeddings in jsonl format for several encoders and several datasets, together with the query representations, can be downloaded from this HuggingFace repository.

As an example, the Splade embeddings for MSMARCO can be downloaded and extracted by running the following commands.

wget https://huggingface.co/datasets/tuskanny/seismic-msmarco-splade/resolve/main/documents.tar.gz?download=true -O documents.tar.gz 

tar -xvzf documents.tar.gz

or by using the Hugging Face dataset download tool.

📄 Data Format

Documents and queries share the same format: each line is a JSON-formatted string with the following fields:

  • id: the ID of the document, as an integer.
  • content: the original content of the document, as a string. This field is optional.
  • vector: a dictionary where each key represents a token, and its corresponding value is the score, e.g., {"dog": 2.45}.
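Concretely, one line of such a collection can be produced with the standard json module; the document below is illustrative:

```python
import json

doc = {
    "id": 0,                               # integer document ID
    "content": "my dog barks",             # optional original text
    "vector": {"dog": 2.45, "bark": 1.3},  # token -> score
}

# Each document is one JSON object per line of the jsonl collection.
line = json.dumps(doc)
```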

This is the standard output format of several libraries for training sparse models, such as learned-sparse-retrieval.

The script convert_json_to_inner_format.py converts files in this format into Seismic's inner format.

python scripts/convert_json_to_inner_format.py --document-path /path/to/document.jsonl --query-path /path/to/queries.jsonl --output-dir /path/to/output 

This generates a data directory at /path/to/output containing the documents.bin and queries.bin binary files.

If you download the NQ dataset from the HuggingFace repo, you need to specify --input-format nq as it uses a slightly different format.

🪏 Resources

Check out our docs folder for detailed guides.

🏆 Best Results

Seismic is an approximate algorithm designed for high-performance retrieval over learned sparse representations. We provide pre-optimized configurations for several common datasets, e.g., MSMARCO. Check the detailed documentation in docs/BestResults.md and the available optimized configurations in experiments/best_configs.

📚 Bibliography

  1. Sebastian Bruch, Franco Maria Nardini, Cosimo Rulli, and Rossano Venturini. "Efficient Inverted Indexes for Approximate Retrieval over Learned Sparse Representations." Proc. ACM SIGIR. 2024.
  2. Sebastian Bruch, Franco Maria Nardini, Cosimo Rulli, and Rossano Venturini. "Pairing Clustered Inverted Indexes with κ-NN Graphs for Fast Approximate Retrieval over Learned Sparse Representations." Proc. ACM CIKM. 2024.
  3. Sebastian Bruch, Franco Maria Nardini, Cosimo Rulli, Rossano Venturini, and Leonardo Venuta. "Investigating the Scalability of Approximate Sparse Retrieval Algorithms to Massive Datasets." Proc. ECIR. 2025.

Citation License

The source code in this repository is subject to the following citation license:

By downloading and using this software, you agree to cite the under-noted papers in any kind of material you produce where it was used to conduct a search or experimentation, whether it be a research paper, dissertation, article, poster, presentation, or documentation. By using this software, you have agreed to the citation license.

SIGIR 2024

@inproceedings{bruch2024seismic,
  author    = {Bruch, Sebastian and Nardini, Franco Maria and Rulli, Cosimo and Venturini, Rossano},
  title     = {Efficient Inverted Indexes for Approximate Retrieval over Learned Sparse Representations},
  booktitle = {Proceedings of the 47th International {ACM} {SIGIR} {C}onference on Research and Development in Information Retrieval ({SIGIR})},
  pages     = {152--162},
  publisher = {{ACM}},
  year      = {2024},
  url       = {https://doi.org/10.1145/3626772.3657769},
  doi       = {10.1145/3626772.3657769}
}

CIKM 2024

@inproceedings{bruch2024pairing,
  author    = {Bruch, Sebastian and Nardini, Franco Maria and Rulli, Cosimo and Venturini, Rossano},
  title     = {Pairing Clustered Inverted Indexes with $\kappa$-NN Graphs for Fast Approximate Retrieval over Learned Sparse Representations},
  booktitle = {Proceedings of the 33rd International {ACM} {C}onference on {I}nformation and {K}nowledge {M}anagement ({CIKM})},
  pages     = {3642--3646},
  publisher = {{ACM}},
  year      = {2024},
  url       = {https://doi.org/10.1145/3627673.3679977},
  doi       = {10.1145/3627673.3679977}
}

ECIR 2025

@inproceedings{bruch2025investigating,
  author    = {Bruch, Sebastian and Nardini, Franco Maria and Rulli, Cosimo and Venturini, Rossano and Venuta, Leonardo},
  title     = {Investigating the Scalability of Approximate Sparse Retrieval Algorithms to Massive Datasets},
  booktitle = {Advances in Information Retrieval},
  pages     = {437--445},
  publisher = {Springer Nature Switzerland},
  year      = {2025},
  url       = {https://doi.org/10.1007/978-3-031-88714-7_43},
  doi       = {10.1007/978-3-031-88714-7_43}
}

Download files

Download the file for your platform.

Source Distribution

pyseismic_lsr-0.4.3.tar.gz (1.5 MB)

Uploaded Source

Built Distribution


pyseismic_lsr-0.4.3-cp310-cp310-manylinux_2_34_x86_64.whl (859.5 kB)

Uploaded: CPython 3.10, manylinux: glibc 2.34+, x86-64

File details

Details for the file pyseismic_lsr-0.4.3.tar.gz.

File metadata

  • Download URL: pyseismic_lsr-0.4.3.tar.gz
  • Upload date:
  • Size: 1.5 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.16

File hashes

Hashes for pyseismic_lsr-0.4.3.tar.gz:

  • SHA256: 22075643b811d85eaa6efcf0c77315bd6c948f371d16ffabfc3921c2159e2cbd
  • MD5: 624acda9512d4f292c6dfc6e322e34a0
  • BLAKE2b-256: a48ff2bedbcc714da80e7ef1d9c0517077a0bd1ea13c513f08934d8b3e38c323

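If you download an archive manually, its SHA256 digest can be checked against the value above with Python's standard hashlib; the file path below is illustrative:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in chunks and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "22075643b811d85eaa6efcf0c77315bd6c948f371d16ffabfc3921c2159e2cbd"
# sha256_of("pyseismic_lsr-0.4.3.tar.gz") should equal expected
```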

File details

Details for the file pyseismic_lsr-0.4.3-cp310-cp310-manylinux_2_34_x86_64.whl.

File metadata

File hashes

Hashes for pyseismic_lsr-0.4.3-cp310-cp310-manylinux_2_34_x86_64.whl:

  • SHA256: 812b4454d8b07a4c3a5ba7df83d2c786d0fc78d909034a68358b0efe3d141802
  • MD5: 8bb2a9e95abfce54a6fe870a7332543c
  • BLAKE2b-256: dd0f3231969678096fc5d389235c421ff6c3b03d457bacba9675949c5bb5c98d

