A package for massive serving

Massive Serve User Guide

One command to download and serve a datastore---that's it 😎.

Installation

pip install massive-serve --upgrade

Usage

A list of currently supported datastores can be found in the massive-serve collection. I will keep adding more domain and retriever combinations! Open an issue to request new datastores 😉.

To serve a demo datastore:

massive-serve serve --domain_name demo

To serve a wikipedia datastore:

massive-serve serve --domain_name dpr_wiki_contriever_ivfpq

Useful notes:

  • To avoid manually specifying the data storage location (e.g., in slurm jobs), set the DATASTORE_PATH environment variable to your desired data directory.
  • To specify nprobe (defaults to 64; it sets how many of the 2024 IVF clusters are searched per query), add "nprobe": XX to the JSON body of your curl request.

It will then download the index, serve it, and print the API endpoint together with one example request in the terminal, e.g.,

╔════════════════════════════════════════════════════════════╗
║                    MASSIVE SERVE SERVER                    ║
╠════════════════════════════════════════════════════════════╣
║ Domain: demo                                               ║
║ Server: XXX                                                ║
║ Port:   XXX                                                ║
║ Endpoint: XXX@XXX:XXX/search                               ║
╚════════════════════════════════════════════════════════════╝


Test your server with this curl command:

curl -X POST XXX@XXX:XXX/search -H "Content-Type: application/json" -d '{"query": "Tell me more about the stories of Einstein.", "n_docs": 1, "domains": "demo"}'
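Per the notes above, nprobe can also be overridden per request. Below is a minimal Python sketch of building such a request body; the endpoint placeholder is illustrative, and the nprobe value of 128 is only an example:

```python
import json

# Build a search request that overrides the default nprobe (64).
# A larger nprobe searches more IVF clusters per query, trading
# latency for recall. The endpoint below is a placeholder.
payload = {
    "query": "Tell me more about the stories of Einstein.",
    "n_docs": 1,
    "domains": "demo",
    "nprobe": 128,  # example value; omit to use the server default
}

body = json.dumps(payload)
# import requests
# requests.post("http://<user>@<address>:<port>/search",
#               data=body, headers={"Content-Type": "application/json"})
print(body)
```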

Send Requests

Once the API is being served, you can send either single-query or bulk-query requests to it.

Bash Examples.

# single-query request
curl -X POST <user>@<address>:<port>/search -H "Content-Type: application/json" -d '{"query": "Where was Marie Curie born?", "n_docs": 1, "domains": "dpr_wiki_contriever"}'

# multi-query request
curl -X POST <user>@<address>:<port>/search -H "Content-Type: application/json" -d '{"query": ["Where was Marie Curie born?", "What is the capital of France?", "Who invented the telephone?"], "n_docs": 2, "domains": "dpr_wiki_contriever"}'

Python Example.

import requests

json_data = {
    "query": "Where was Marie Curie born?",
    "n_docs": 20,
    "domains": "dpr_wiki_contriever"
}
headers = {"Content-Type": "application/json"}

# Add 'http://' to the URL if it is not SSL/TLS secured, otherwise use 'https://'
response = requests.post('http://<user>@<address>:<port>/search', json=json_data, headers=headers)

print(response.status_code)
print(response.json())

Example output of a multi-query request:

{
  "message": "Search completed for '['Where was Marie Curie born?', 'What is the capital of France?', 'Who invented the telephone?']' from MassiveDS",
  "n_docs": 2,
  "query": [
    "Where was Marie Curie born?",
    "What is the capital of France?",
    "Who invented the telephone?"
  ],
  "results": {
    "n_docs": 2,
    "query": [
      "Where was Marie Curie born?",
      "What is the capital of France?",
      "Who invented the telephone?"
    ],
    "results": {
      "IDs": [
        [
          [3, 3893807],
          [17, 11728753]
        ],
        [
          [14, 12939685],
          [22, 1070951]
        ],
        [
          [28, 18823956],
          [22, 10406782]
        ]
      ],
      "passages": [
        [
          "Marie Skłodowska Curie (November 7, 1867 – July 4, 1934) was a physicist and chemist of Polish upbringing and, subsequently, French citizenship. ...",
          "=> Maria Skłodowska, better known as Marie Curie, was born on 7 November in Warsaw, Poland. ..."
        ],
        [
          "Paris is the capital and most populous city in France, as well as the administrative capital of the region of Île-de-France. ...",
          "[paʁi] ( listen)) is the capital and largest city of France. ..."
        ],
        [
          "Antonio Meucci (Florence, April 13, 1808 – October 18, 1889) was an Italian inventor. ...",
          "The telephone or phone is a telecommunications device that transmits speech by means of electric signals. ..."
        ]
      ],
      "scores": [
        [
          1.8422218561172485,
          1.8394594192504883
        ],
        [
          1.5528039932250977,
          1.5502511262893677
        ],
        [
          1.714379906654358,
          1.706493854522705
        ]
      ]
    }
  }
}
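To illustrate working with this nested layout, here is a short sketch that pulls the top passage and score for each query out of a response shaped like the one above. The `resp` dict is a truncated, hand-shortened stand-in for real output, not actual server data:

```python
# `resp` mirrors the nested response structure shown above,
# truncated to two queries with shortened passage text.
resp = {
    "results": {
        "results": {
            "passages": [
                ["Marie Curie was a physicist and chemist ...",
                 "Marie Curie was born on 7 November in Warsaw, Poland. ..."],
                ["Paris is the capital and most populous city in France ...",
                 "Paris is the capital and largest city of France. ..."],
            ],
            "scores": [
                [1.8422, 1.8395],
                [1.5528, 1.5503],
            ],
        }
    }
}

inner = resp["results"]["results"]
# Passages are returned sorted by score, so index 0 is the top hit per query.
top_hits = [
    (passages[0], scores[0])
    for passages, scores in zip(inner["passages"], inner["scores"])
]
```

Each element of `top_hits` pairs the highest-scoring passage with its score for the corresponding query.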

Massive Serve Developer Guide

Environment Setup

Using Conda (Recommended for GPU support)

  1. Clone the repository and create a new conda environment:
git clone https://github.com/RulinShao/massive-serve.git
cd massive-serve
conda env create -f conda-env.yml
conda activate massive-serve

To update the existing environment:

conda env update -n massive-serve -f conda-env.yml

Upload a new index

python -m massive_serve.cli upload-data --domain_name demo

Test serving the index:

python -m massive_serve.cli serve --domain_name demo

Update package

Make sure the version in setup.py has been bumped to a new version number. Then run:

rm -rf dist/ build/ massive_serve.egg-info/
pip install build twine
python -m build
python -m twine upload dist/*

Users can then upgrade their installed package via:

pip install --upgrade massive-serve

License

This project is licensed under the MIT License - see the LICENSE file for details.

Citation

If you find our package helpful, please cite:

@article{shao2024scaling,
  title={Scaling retrieval-based language models with a trillion-token datastore},
  author={Shao, Rulin and He, Jacqueline and Asai, Akari and Shi, Weijia and Dettmers, Tim and Min, Sewon and Zettlemoyer, Luke and Koh, Pang Wei W},
  journal={Advances in Neural Information Processing Systems},
  volume={37},
  pages={91260--91299},
  year={2024}
}

