A package for massive serving

Massive Serve User Guide

One command to download and serve a datastore. That's it 😎.

Installation

pip install massive-serve --upgrade

Usage

The list of currently supported datastores can be found in the massive-serve collection. I will keep adding more domain and retriever combinations! Open an issue to request new datastores 😉.

To serve a demo datastore:

massive-serve serve --domain_name demo

To serve a wikipedia datastore:

massive-serve serve --domain_name dpr_wiki_contriever_ivfpq

Useful notes:

  • To avoid manually specifying the data storage location (e.g., in slurm jobs), set the DATASTORE_PATH environment variable to your desired data directory.
  • To specify nprobe (defaults to 64; it defines how many clusters out of 2024 to search in the IVF index), just add "nprobe": XX to your curl request.
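The two notes above can be combined as in the following sketch (`<user>@<address>:<port>` and `/path/to/datastore` are placeholders for your own server and data directory, and the `nprobe` value is illustrative):

```shell
# Point massive-serve at a custom data directory (useful in slurm jobs)
export DATASTORE_PATH=/path/to/datastore

# Search more IVF clusters than the default 64 by passing "nprobe"
curl -X POST <user>@<address>:<port>/search \
  -H "Content-Type: application/json" \
  -d '{"query": "Where was Marie Curie born?", "n_docs": 1, "domains": "dpr_wiki_contriever_ivfpq", "nprobe": 128}'
```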

The command will then download and serve the index, and print the API endpoint along with an example request in the terminal, e.g.,

╔════════════════════════════════════════════════════════════╗
║                    MASSIVE SERVE SERVER                    ║
╠════════════════════════════════════════════════════════════╣
║ Domain: demo                                               ║
║ Server: XXX                                                ║
║ Port:   XXX                                                ║
║ Endpoint: XXX@XXX:XXX/search                               ║
╚════════════════════════════════════════════════════════════╝


Test your server with this curl command:

curl -X POST XXX@XXX:XXX/search -H "Content-Type: application/json" -d '{"query": "Tell me more about the stories of Einstein.", "n_docs": 1, "domains": "demo"}'

Send Requests

Once the API is being served, you can send either single-query or bulk-query requests to it.

Bash Examples.

# single-query request
curl -X POST <user>@<address>:<port>/search -H "Content-Type: application/json" -d '{"query": "Where was Marie Curie born?", "n_docs": 1, "domains": "dpr_wiki_contriever"}'

# multi-query request
curl -X POST <user>@<address>:<port>/search -H "Content-Type: application/json" -d '{"query": ["Where was Marie Curie born?", "What is the capital of France?", "Who invented the telephone?"], "n_docs": 2, "domains": "dpr_wiki_contriever"}'

Python Example.

import requests

json_data = {
    "query": "Where was Marie Curie born?",
    "n_docs": 20,
    "domains": "dpr_wiki_contriever",
}
headers = {"Content-Type": "application/json"}

# Use 'http://' if the server is not SSL/TLS secured, otherwise 'https://'
response = requests.post("http://<user>@<address>:<port>/search", json=json_data, headers=headers)

print(response.status_code)
print(response.json())
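The multi-query curl example above has a Python counterpart: passing a list for `query` triggers batch mode. Below is a minimal standard-library sketch (the placeholder URL must be replaced with the address printed at server startup; the helper names `build_search_payload` and `search` are ours for illustration, not part of the package):

```python
import json
import urllib.request


def build_search_payload(queries, n_docs=2, domains="dpr_wiki_contriever"):
    """Build the /search request body; a list for 'query' means multi-query."""
    return {"query": queries, "n_docs": n_docs, "domains": domains}


def search(url, payload):
    """POST the payload to the /search endpoint and return the parsed JSON."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_search_payload(
    ["Where was Marie Curie born?", "What is the capital of France?"]
)
print(json.dumps(payload))
# results = search("http://<user>@<address>:<port>/search", payload)
```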

Example output of a multi-query request:

{
  "message": "Search completed for '['Where was Marie Curie born?', 'What is the capital of France?', 'Who invented the telephone?']' from MassiveDS",
  "n_docs": 2,
  "query": [
    "Where was Marie Curie born?",
    "What is the capital of France?",
    "Who invented the telephone?"
  ],
  "results": {
    "n_docs": 2,
    "query": [
      "Where was Marie Curie born?",
      "What is the capital of France?",
      "Who invented the telephone?"
    ],
    "results": {
      "IDs": [
        [
          [3, 3893807],
          [17, 11728753]
        ],
        [
          [14, 12939685],
          [22, 1070951]
        ],
        [
          [28, 18823956],
          [22, 10406782]
        ]
      ],
      "passages": [
        [
          "Marie Skłodowska Curie (November 7, 1867 – July 4, 1934) was a physicist and chemist of Polish upbringing and, subsequently, French citizenship. ...",
          "=> Maria Skłodowska, better known as Marie Curie, was born on 7 November in Warsaw, Poland. ..."
        ],
        [
          "Paris is the capital and most populous city in France, as well as the administrative capital of the region of Île-de-France. ...",
          "[paʁi] ( listen)) is the capital and largest city of France. ..."
        ],
        [
          "Antonio Meucci (Florence, April 13, 1808 – October 18, 1889) was an Italian inventor. ...",
          "The telephone or phone is a telecommunications device that transmits speech by means of electric signals. ..."
        ]
      ],
      "scores": [
        [
          1.8422218561172485,
          1.8394594192504883
        ],
        [
          1.5528039932250977,
          1.5502511262893677
        ],
        [
          1.714379906654358,
          1.706493854522705
        ]
      ]
    }
  }
}
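The doubly nested `results` object above can be awkward to consume. A small helper (a sketch written against the multi-query response shape shown above, not part of the package) flattens it into per-query hits:

```python
def flatten_response(resp):
    """Zip each query with its (id, passage, score) hits.

    Expects the nested multi-query shape shown above, where
    resp["results"]["results"] holds parallel lists "IDs",
    "passages", and "scores": one entry per query, each entry
    a list of n_docs hits.
    """
    inner = resp["results"]["results"]
    queries = resp["results"]["query"]
    return {
        q: [
            {"id": doc_id, "passage": passage, "score": score}
            for doc_id, passage, score in zip(
                inner["IDs"][i], inner["passages"][i], inner["scores"][i]
            )
        ]
        for i, q in enumerate(queries)
    }


# Minimal example mirroring the output above (one query, one hit)
example = {
    "results": {
        "query": ["Where was Marie Curie born?"],
        "results": {
            "IDs": [[[3, 3893807]]],
            "passages": [["Marie Sklodowska Curie ..."]],
            "scores": [[1.8422218561172485]],
        },
    }
}
hits = flatten_response(example)
print(hits["Where was Marie Curie born?"][0]["id"])
```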

Massive Serve Developer Guide

Environment Setup

Using Conda (Recommended for GPU support)

  1. Clone the repository and create a new conda environment:
git clone https://github.com/RulinShao/massive-serve.git
cd massive-serve
conda env create -f conda-env.yml
conda activate massive-serve

To update the existing environment:

conda env update -n massive-serve -f conda-env.yml

Upload new index

python -m massive_serve.cli upload-data --domain_name demo

Test serving the index:

python -m massive_serve.cli serve --domain_name demo

Update package

Make sure the version in setup.py has been bumped to a new version. Then run:

rm -rf dist/ build/ massive_serve.egg-info/
pip install build twine
python -m build
python -m twine upload dist/*

Users can upgrade their installed package via:

pip install --upgrade massive-serve

License

This project is licensed under the MIT License - see the LICENSE file for details.

Citation

If you find our package helpful, please cite:

@article{shao2024scaling,
  title={Scaling retrieval-based language models with a trillion-token datastore},
  author={Shao, Rulin and He, Jacqueline and Asai, Akari and Shi, Weijia and Dettmers, Tim and Min, Sewon and Zettlemoyer, Luke and Koh, Pang Wei W},
  journal={Advances in Neural Information Processing Systems},
  volume={37},
  pages={91260--91299},
  year={2024}
}

@software{massiveserve2025,
  author = {Shao, Rulin},
  title  = {MassiveServe: Serving and Sharing Massive Datastores},
  year   = 2025,
  url    = {https://github.com/RulinShao/massive-serve}
}
