
NBoost


Highlights • Overview • Benchmarks • Install • Getting Started • Documentation • Tutorials • Contributing • Release Notes • Blog

What is it

NBoost is a scalable, search-engine-boosting platform for developing and deploying state-of-the-art models to improve the relevance of search results.

NBoost leverages fine-tuned models to produce domain-specific neural search engines. The platform can also improve other downstream tasks that require ranked input, such as question answering.

Contact us to request domain-specific models or leave feedback

Overview

The workflow of NBoost is relatively simple: imagine that the upstream server in this example is Elasticsearch.

In a conventional search request, the user sends a query to Elasticsearch and gets back the results.

In an NBoost search request, the user sends a query to the model. Then, the model asks for results from Elasticsearch and picks the best ones to return to the user.
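To make the contrast concrete, here is a minimal sketch using Python's requests library. It assumes the tutorial setup described later in this README (Elasticsearch on localhost:9200, the NBoost proxy on localhost:8000, and a travel index with a passage field); from the client's point of view, the only thing that changes is the port.

import requests

# Conventional search request: straight to Elasticsearch
direct = requests.get("http://localhost:9200/travel/_search",
                      params={"q": "passage:vegas", "size": 2})

# NBoost search request: the same query, sent to the proxy instead; the proxy
# fetches a larger candidate set from Elasticsearch, reranks it with the
# model, and returns the best results to the client
boosted = requests.get("http://localhost:8000/travel/_search",
                       params={"q": "passage:vegas", "size": 2})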

Benchmarks

To use one of the fine-tuned models below with NBoost, run, for example, nboost --model_dir bert-base-uncased-msmarco; the model will be downloaded and cached automatically.

| Fine-tuned Models | Dependency | Domain | Search Boost[4] | Speed |
| --- | --- | --- | --- | --- |
| bert-base-uncased-msmarco (default)[1] | TensorFlow | bing queries | +80% (0.30 vs 0.17) | ~300 ms/query[3] |
| biobert-base-uncased-msmarco | TensorFlow | biomed | +66% (0.17 vs 0.10) | ~300 ms/query[3] |
| bert-tiny-uncased (coming soon) | TensorFlow | - | - | - |
| albert-tiny-uncased (coming soon) | TensorFlow | - | - | ~50 ms/query[3] |

[4] MRR compared to BM25, the default for Elasticsearch. Reranking top 50.

Using pre-trained language understanding models, you can boost search relevance metrics by nearly 2x compared to plain text search, with little to no extra configuration. Since there is usually a tradeoff between model accuracy and speed, we benchmark both factors above. This leaderboard is a work in progress, and we intend to release more cutting-edge models!

Install NBoost

There are two ways to get NBoost: as a Docker image or as a PyPI package. For cloud users, we highly recommend using NBoost via Docker.

🚸 Depending on your model, you should install the respective TensorFlow or PyTorch dependencies. We package them below.

To install NBoost, follow the table below.

| Dependency | 🐳 Docker | 📦 PyPI |
| --- | --- | --- |
| TensorFlow (recommended) | koursaros/nboost:latest-tf | pip install nboost[tf] |
| PyTorch | koursaros/nboost:latest-torch | pip install nboost[torch] |
| All | koursaros/nboost:latest-all | pip install nboost[all] |
| - (for testing) | koursaros/nboost:latest-alpine | pip install nboost |

However you install it, if you see the following message after running $ nboost --help or $ docker run koursaros/nboost --help, then you are ready to go!

(screenshot: successful NBoost installation)

Getting Started

📡 The Proxy

(diagram: component overview)

The Proxy is the core of NBoost. It is essentially a wrapper that serves the model and understands incoming messages from specific search APIs (e.g. Elasticsearch). When the proxy receives a request, it increases the number of results the client is asking for so that the model can rerank a larger set and return the (hopefully) better results.

For instance, if a client asks Elasticsearch for 10 results for the query "brown dogs", the proxy may increase the request to 100 results and then return only the best ten to the client.
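The core idea can be sketched in a few lines of Python. This is only an illustration of over-fetching and reranking in general, not NBoost's internal API; score_fn stands in for the neural model's relevance score.

# Illustrative sketch of over-fetch-and-rerank (not NBoost internals)
def rerank_top_k(query, candidates, score_fn, k=10):
    """Score every candidate with the model and keep only the best k."""
    ranked = sorted(candidates, key=lambda doc: score_fn(query, doc), reverse=True)
    return ranked[:k]

# e.g. the proxy requests 100 candidates from the search engine, then:
# best_ten = rerank_top_k("brown dogs", candidates, model_score, k=10)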

Setting up a Neural Proxy for Elasticsearch in 3 minutes

In this example we will set up a proxy to sit in between the client and Elasticsearch and boost the results!

Installing NBoost with TensorFlow

If you want to run the example on a GPU, make sure you have TensorFlow 1.14–1.15 (with CUDA) to support the modeling functionality. If you just want to run it on a CPU, don't worry about it. In both cases, just run:

pip install nboost[tf]

Setting up an Elasticsearch Server

🔔 If you already have an Elasticsearch server, you can move on to the next step!

If you don't have Elasticsearch, not to worry! You can set up a local Elasticsearch cluster using Docker. First, get the ES image by running:

docker pull elasticsearch:7.4.2

Once you have the image, you can run an Elasticsearch server via:

docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:7.4.2
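If you want to confirm the cluster is reachable before moving on, you can hit the root endpoint. This step is optional and assumes the default localhost:9200 mapping from the command above; it uses the Python requests library.

import requests

# The root endpoint of Elasticsearch returns basic cluster info,
# including the version number (7.4.2 for the image pulled above)
info = requests.get("http://localhost:9200").json()
print(info["version"]["number"])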

Deploying the proxy

Now we're ready to deploy our Neural Proxy! To do so, simply run:

nboost --uport 9200

📢 The --uhost and --uport should point to the Elasticsearch server above! uhost and uport are short for upstream host and upstream port (referring to the upstream server).

If you get this message: Listening: <host>:<port>, then we're good to go!
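If your Elasticsearch server lives on another machine, pass both flags explicitly, for example nboost --uhost 10.0.0.5 --uport 9200 (the address here is just a placeholder for your own host).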

Indexing some data

The proxy is now set up so that there is no need to talk to the Elasticsearch server directly from here on out. You can still send index requests, stats requests, and so on; only search requests will be altered. For demonstration purposes, we will index a set of passages about traveling and hotels through NBoost. You can add the index to your Elasticsearch server by running:

nboost-tutorial Travel --port 8000 --field passage
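Because index requests pass straight through the proxy, you could also add documents yourself with the standard Elasticsearch REST API. A minimal sketch, assuming the proxy on port 8000 and the travel index with a passage field created by the tutorial command above (the document text here is made up):

import requests

doc = {"passage": "The hotel is a short walk from the Las Vegas strip."}
# The index request passes through the proxy untouched and lands in Elasticsearch as usual
resp = requests.put("http://localhost:8000/travel/_doc/1", json=doc)
print(resp.json())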

Now let's test it out! Query Elasticsearch through the proxy with:

curl "http://localhost:8000/travel/_search?pretty&q=passage:vegas&size=2"

If the Elasticsearch result has the _nboost tag in it, congratulations, it's working!

What just happened? You asked for two results from Elasticsearch having to do with "vegas". The proxy intercepted this request, asked Elasticsearch for 10 results, and the model picked the best two. Magic! 🔮 (statistics)
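The same check can be scripted. A small sketch, assuming the setup above; the exact placement of the _nboost tag in the response body may vary, so this just looks for it anywhere and prints the reranked passages:

import requests

body = requests.get("http://localhost:8000/travel/_search",
                    params={"q": "passage:vegas", "size": 2}).json()
print("_nboost" in str(body))               # True if the proxy reranked the results
for hit in body["hits"]["hits"]:
    print(hit["_source"]["passage"][:80])   # the top passages after reranking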


Elastic made easy

To increase the number of parallel proxies, simply increase --workers. For a more robust deployment approach, you can distribute the proxy via Docker Swarm or Kubernetes.
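For example, something like nboost --uport 9200 --workers 4 would run four parallel proxy workers; adjust the count to your hardware.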

Deploying a proxy via Docker Swarm/Kubernetes

🚧 Swarm yaml/Helm chart under construction...

Documentation

ReadTheDocs

The official NBoost documentation is hosted on nboost.readthedocs.io. It is automatically built, updated and archived on every new release.

Tutorials

🚧 Under construction.

Contributing

Contributions are greatly appreciated! You can make corrections or updates and commit them to NBoost. Here are the steps:

  1. Create a new branch, say fix-nboost-typo-1
  2. Fix/improve the codebase
  3. Commit the changes. Note the commit message must follow the naming style, say Fix/model-bert: improve the readability and move sections
  4. Make a pull request. Note the pull request must follow the naming style. It can simply be one of your commit messages, just copy paste it, e.g. Fix/model-bert: improve the readability and move sections
  5. Submit your pull request and wait for all checks to pass (usually 10 minutes)
    • Coding style
    • Commit and PR styles check
    • All unit tests
  6. Request a review from one of the developers on our core team.
  7. Merge!

More details can be found in the contributor guidelines.

Citing NBoost

If you use NBoost in an academic paper, we would love to be cited. Here are the two ways of citing NBoost:

  1. \footnote{https://github.com/koursaros-ai/nboost}
    
  2. @misc{koursaros2019NBoost,
      title={NBoost: Neural Boosting Search Results},
      author={Thienes, Cole and Pertschuk, Jack},
      howpublished={\url{https://github.com/koursaros-ai/nboost}},
      year={2019}
    }
    

Footnotes

[1] https://github.com/nyu-dl/dl4marco-bert
[2] https://github.com/huggingface/transformers
[3] Milliseconds to rerank each hit, measured on an NVIDIA T4 GPU.

License

If you have downloaded a copy of the NBoost binary or source code, please note that the NBoost binary and source code are both licensed under the Apache License, Version 2.0.

Koursaros AI is excited to bring this open source software to the community.
Copyright (C) 2019. All rights reserved.
