
A package for massive serving


Massive Serve User Guide

One command to download and serve a datastore---that's it 😎.

Installation

pip install massive-serve --upgrade

Usage

A list of currently supported datastores can be found in the massive-serve collection. I will keep adding more domain and retriever combinations! Open an issue to request new datastores 😉.

To serve a demo datastore:

massive-serve serve --domain_name demo

To serve a Wikipedia datastore:

massive-serve serve --domain_name dpr_wiki_contriever_ivfpq

Useful notes:

  • To avoid manually specifying the data storage location (e.g., in slurm jobs), set the DATASTORE_PATH environment variable to your desired data directory.
  • To specify nprobe (defaults to 64; this controls how many of the 2024 clusters the IVF index searches), add "nprobe": XX to the JSON body of your request.
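As a sketch of how such a request body might be assembled in Python: the build_search_payload helper below is hypothetical (not part of massive-serve); only the "query", "n_docs", "domains", and "nprobe" field names come from this guide.

```python
# Hypothetical helper: assemble the JSON body for a /search request.
# The field names ("query", "n_docs", "domains", "nprobe") follow the
# examples in this guide; the function itself is an illustration only.
def build_search_payload(query, n_docs=1, domains="demo", nprobe=None):
    payload = {"query": query, "n_docs": n_docs, "domains": domains}
    if nprobe is not None:
        payload["nprobe"] = nprobe  # override the server default of 64
    return payload

payload = build_search_payload("Where was Marie Curie born?", n_docs=2,
                               domains="dpr_wiki_contriever", nprobe=128)
```

The same dict can be passed to curl via -d or to requests.post via json=.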

The command will then download and serve the index, printing the API endpoint and one example request in the terminal, e.g.,

โ•”โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•—
โ•‘                    MASSIVE SERVE SERVER                    โ•‘
โ• โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ฃ
โ•‘ Domain: demo                                               โ•‘
โ•‘ Server: XXX                                                โ•‘
โ•‘ Port:   XXX                                                โ•‘
โ•‘ Endpoint: XXX@XXX:XXX/search                               โ•‘
โ•šโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•


Test your server with this curl command:

curl -X POST XXX@XXX:XXX/search -H "Content-Type: application/json" -d '{"query": "Tell me more about the stories of Einstein.", "n_docs": 1, "domains": "demo"}'

Send Requests

Once the API is being served, you can send either single or bulk query requests to it.

Bash Examples.

# single-query request
curl -X POST <user>@<address>:<port>/search -H "Content-Type: application/json" -d '{"query": "Where was Marie Curie born?", "n_docs": 1, "domains": "dpr_wiki_contriever"}'

# multi-query request
curl -X POST <user>@<address>:<port>/search -H "Content-Type: application/json" -d '{"query": ["Where was Marie Curie born?", "What is the capital of France?", "Who invented the telephone?"], "n_docs": 2, "domains": "dpr_wiki_contriever"}'

Python Example.

import requests

json_data = {
    "query": "Where was Marie Curie born?",
    "n_docs": 20,
    "domains": "dpr_wiki_contriever"
}
headers = {"Content-Type": "application/json"}

# Add 'http://' to the URL if it is not SSL/TLS secured, otherwise use 'https://'
response = requests.post('http://<user>@<address>:<port>/search', json=json_data, headers=headers)

print(response.status_code)
print(response.json())

Example output of a multi-query request:

{
  "message": "Search completed for '['Where was Marie Curie born?', 'What is the capital of France?', 'Who invented the telephone?']' from MassiveDS",
  "n_docs": 2,
  "query": [
    "Where was Marie Curie born?",
    "What is the capital of France?",
    "Who invented the telephone?"
  ],
  "results": {
    "n_docs": 2,
    "query": [
      "Where was Marie Curie born?",
      "What is the capital of France?",
      "Who invented the telephone?"
    ],
    "results": {
      "IDs": [
        [
          [3, 3893807],
          [17, 11728753]
        ],
        [
          [14, 12939685],
          [22, 1070951]
        ],
        [
          [28, 18823956],
          [22, 10406782]
        ]
      ],
      "passages": [
        [
          "Marie Skłodowska Curie (November 7, 1867 – July 4, 1934) was a physicist and chemist of Polish upbringing and, subsequently, French citizenship. ...",
          "=> Maria Skłodowska, better known as Marie Curie, was born on 7 November in Warsaw, Poland. ..."
        ],
        [
          "Paris is the capital and most populous city in France, as well as the administrative capital of the region of Île-de-France. ...",
          "[paʁi] ( listen)) is the capital and largest city of France. ..."
        ],
        [
          "Antonio Meucci (Florence, April 13, 1808 – October 18, 1889) was an Italian inventor. ...",
          "The telephone or phone is a telecommunications device that transmits speech by means of electric signals. ..."
        ]
      ],
      "scores": [
        [
          1.8422218561172485,
          1.8394594192504883
        ],
        [
          1.5528039932250977,
          1.5502511262893677
        ],
        [
          1.714379906654358,
          1.706493854522705
        ]
      ]
    }
  }
}
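Given the nested shape above, a small sketch of how a multi-query response might be flattened into per-query hits. The top_hits helper is hypothetical; the key names ("query", "results", "passages", "scores") mirror the example output in this guide.

```python
# Hypothetical helper: pair each query with its (passage, score) hits,
# following the nested "results" structure shown in the example output.
def top_hits(response_json):
    inner = response_json["results"]["results"]
    return {
        query: list(zip(passages, scores))
        for query, passages, scores in zip(
            response_json["query"], inner["passages"], inner["scores"]
        )
    }

# Minimal sample shaped like the example output above.
sample = {
    "query": ["Where was Marie Curie born?"],
    "results": {"results": {
        "passages": [["Marie Sklodowska Curie ..."]],
        "scores": [[1.84]],
    }},
}
hits = top_hits(sample)
# hits["Where was Marie Curie born?"] -> [("Marie Sklodowska Curie ...", 1.84)]
```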

Massive Serve Developer Guide

Environment Setup

Using Conda (Recommended for GPU support)

  1. Clone the repository and create a new conda environment:
git clone https://github.com/RulinShao/massive-serve.git
cd massive-serve
conda env create -f conda-env.yml
conda activate massive-serve

To update the existing environment:

conda env update -n massive-serve -f conda-env.yml

Upload a new index

python -m massive_serve.cli upload-data --domain_name demo

Test serving the index:

python -m massive_serve.cli serve --domain_name demo

Update package

Make sure the version in setup.py has been bumped to a new version. Then run:

rm -rf dist/ build/ massive_serve.egg-info/
pip install build twine
python -m build
python -m twine upload dist/*

Users can then upgrade their installed package via:

pip install --upgrade massive-serve

License

This project is licensed under the MIT License - see the LICENSE file for details.

Citation

If you find our package helpful, please cite:

@article{shao2024scaling,
  title={Scaling retrieval-based language models with a trillion-token datastore},
  author={Shao, Rulin and He, Jacqueline and Asai, Akari and Shi, Weijia and Dettmers, Tim and Min, Sewon and Zettlemoyer, Luke and Koh, Pang Wei W},
  journal={Advances in Neural Information Processing Systems},
  volume={37},
  pages={91260--91299},
  year={2024}
}

@software{massiveserve2025,
  author = {Shao, Rulin},
  title  = {MassiveServe: Serving and Sharing Massive Datastores},
  year   = 2025,
  url    = {https://github.com/RulinShao/massive-serve}
}
