
A package for massive serving


Massive Serve User Guide

One command to download and serve a datastore, that's it 😎.

Installation

pip install massive-serve --upgrade

Usage

The list of currently supported datastores can be found in the massive-serve collection. I will keep adding more domain and retriever combinations! Open an issue to request new datastores 😉.

To serve a demo datastore:

massive-serve serve --domain_name demo

To serve a Wikipedia datastore:

massive-serve serve --domain_name dpr_wiki_contriever_ivfpq

Useful notes:

  • To avoid manually specifying the data storage location (e.g., in slurm jobs), set the DATASTORE_PATH environment variable to your desired data directory.
  • To set nprobe (defaults to 64; it defines how many of the 2024 clusters are searched in the IVF index), add "nprobe": XX to the JSON body of your curl request.
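A quick sketch of both notes together. The path /scratch/datastores and the nprobe value 128 are placeholder values; the endpoint placeholders are filled in from the banner the server prints on startup:

```shell
# Placeholder path: point the server at a custom data directory
# (handy in Slurm jobs, where you may not want the default location).
export DATASTORE_PATH=/scratch/datastores
massive-serve serve --domain_name demo

# Override nprobe for a single request by adding it to the JSON body:
curl -X POST <user>@<address>:<port>/search \
  -H "Content-Type: application/json" \
  -d '{"query": "Where was Marie Curie born?", "n_docs": 1, "domains": "demo", "nprobe": 128}'
```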

It will then download and serve the index, and print the API endpoint and an example request in the terminal, e.g.,

╔════════════════════════════════════════════════════════════╗
║                    MASSIVE SERVE SERVER                    ║
╠════════════════════════════════════════════════════════════╣
║ Domain: demo                                               ║
║ Server: XXX                                                ║
║ Port:   XXX                                                ║
║ Endpoint: XXX@XXX:XXX/search                               ║
╚════════════════════════════════════════════════════════════╝


Test your server with this curl command:

curl -X POST XXX@XXX:XXX/search -H "Content-Type: application/json" -d '{"query": "Tell me more about the stories of Einstein.", "n_docs": 1, "domains": "demo"}'

Send Requests

Once the API is being served, you can send either single or bulk query requests to it.

Bash Examples.

# single-query request
curl -X POST <user>@<address>:<port>/search -H "Content-Type: application/json" -d '{"query": "Where was Marie Curie born?", "n_docs": 1, "domains": "dpr_wiki_contriever"}'

# multi-query request
curl -X POST <user>@<address>:<port>/search -H "Content-Type: application/json" -d '{"query": ["Where was Marie Curie born?", "What is the capital of France?", "Who invented the telephone?"], "n_docs": 2, "domains": "dpr_wiki_contriever"}'

Python Example.

import requests

json_data = {
    "query": "Where was Marie Curie born?",
    "n_docs": 20,
    "domains": "dpr_wiki_contriever"
}
headers = {"Content-Type": "application/json"}

# Use 'http://' if the server is not SSL/TLS secured, otherwise use 'https://'
response = requests.post('http://<user>@<address>:<port>/search', json=json_data, headers=headers)

print(response.status_code)
print(response.json())

Example output of a multi-query request:

{
  "message": "Search completed for '['Where was Marie Curie born?', 'What is the capital of France?', 'Who invented the telephone?']' from MassiveDS",
  "n_docs": 2,
  "query": [
    "Where was Marie Curie born?",
    "What is the capital of France?",
    "Who invented the telephone?"
  ],
  "results": {
    "n_docs": 2,
    "query": [
      "Where was Marie Curie born?",
      "What is the capital of France?",
      "Who invented the telephone?"
    ],
    "results": {
      "IDs": [
        [
          [3, 3893807],
          [17, 11728753]
        ],
        [
          [14, 12939685],
          [22, 1070951]
        ],
        [
          [28, 18823956],
          [22, 10406782]
        ]
      ],
      "passages": [
        [
          "Marie Skłodowska Curie (November 7, 1867 – July 4, 1934) was a physicist and chemist of Polish upbringing and, subsequently, French citizenship. ...",
          "=> Maria Skłodowska, better known as Marie Curie, was born on 7 November in Warsaw, Poland. ..."
        ],
        [
          "Paris is the capital and most populous city in France, as well as the administrative capital of the region of Île-de-France. ...",
          "[paʁi] ( listen)) is the capital and largest city of France. ..."
        ],
        [
          "Antonio Meucci (Florence, April 13, 1808 – October 18, 1889) was an Italian inventor. ...",
          "The telephone or phone is a telecommunications device that transmits speech by means of electric signals. ..."
        ]
      ],
      "scores": [
        [
          1.8422218561172485,
          1.8394594192504883
        ],
        [
          1.5528039932250977,
          1.5502511262893677
        ],
        [
          1.714379906654358,
          1.706493854522705
        ]
      ]
    }
  }
}
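To pull the best passage for each query out of this nested response, you can flatten it with a small helper. This is a minimal sketch assuming the nested shape shown above; the function name `top_hits` is ours, not part of the package:

```python
def top_hits(response_json):
    """Return the best (passage, score, id) per query from a multi-query response."""
    # The payload nests a second "results" object under the top-level one.
    inner = response_json["results"]["results"]
    queries = response_json["results"]["query"]
    hits = []
    for query, ids, passages, scores in zip(
        queries, inner["IDs"], inner["passages"], inner["scores"]
    ):
        # Pair each passage with its score and id, best score first.
        ranked = sorted(zip(passages, scores, ids), key=lambda p: -p[1])
        passage, score, doc_id = ranked[0]
        hits.append({"query": query, "passage": passage, "score": score, "id": doc_id})
    return hits
```

For example, `top_hits(response.json())` on the output above would yield one dict per query, each holding that query's highest-scoring passage.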

Massive Serve Developer Guide

Environment Setup

Using Conda (Recommended for GPU support)

  1. Clone the repository and create a new conda environment:
git clone https://github.com/RulinShao/massive-serve.git
cd massive-serve
conda env create -f conda-env.yml
conda activate massive-serve

To update the existing environment:

conda env update -n massive-serve -f conda-env.yml

Upload a new index

python -m massive_serve.cli upload-data --domain_name demo

Test serving the index:

python -m massive_serve.cli serve --domain_name demo

Update the package

Make sure the version in setup.py has been bumped to a new version number. Then run:

rm -rf dist/ build/ massive_serve.egg-info/
pip install build twine
python -m build
python -m twine upload dist/*

Users can then refresh their installed package via:

pip install --upgrade massive-serve

License

This project is licensed under the MIT License - see the LICENSE file for details.

Citation

If you find our package helpful, please cite:

@article{shao2024scaling,
  title={Scaling retrieval-based language models with a trillion-token datastore},
  author={Shao, Rulin and He, Jacqueline and Asai, Akari and Shi, Weijia and Dettmers, Tim and Min, Sewon and Zettlemoyer, Luke and Koh, Pang Wei W},
  journal={Advances in Neural Information Processing Systems},
  volume={37},
  pages={91260--91299},
  year={2024}
}

@software{massiveserve2025,
  author = {Shao, Rulin},
  title  = {MassiveServe: Serving and Sharing Massive Datastores},
  year   = 2025,
  url    = {https://github.com/RulinShao/massive-serve}
}
