Boosting your Web Services of Deep Learning Applications
• Made by ShannonAI • :globe_with_meridians: http://www.shannonai.com/
What is Service Streamer?
Deep learning models are usually fed data in mini-batches so they can exploit the parallel computing capability of GPUs. Requests arriving at a web service, however, are discrete: with a conventional loop server or threaded server, the GPU handles one request at a time and sits mostly idle, and latency grows linearly with the number of concurrent requests.
ServiceStreamer is middleware for web services of machine learning applications. It collects queued requests from users into mini-batches, significantly enhancing overall system performance by improving GPU utilization.
- :hatching_chick: Easy to use: Minor changes can speed up the model ten times.
- :zap: Fast processing speed: Low latency for online inference of machine learning models.
- :octopus: Good expandability: Easy to be applied to multi-GPU scenarios for handling enormous requests.
- :crossed_swords: Applicability: Works with any web framework and/or deep learning framework.
Install ServiceStreamer with pip (requires Python >= 3.5):

```bash
pip install service_streamer
```
Develop BERT Service in 5 Minutes
We provide a step-by-step tutorial for bringing BERT online in 5 minutes. The resulting service processes 1,400 sentences per second.
Text infilling is a task in natural language processing: given a sentence with several words randomly removed, the model predicts the removed words from the surrounding context.
BERT has attracted a great deal of attention in recent years and achieves state-of-the-art results across many NLP tasks. One of BERT's pre-training objectives is the Masked Language Model (MLM): some of the tokens in the input are randomly masked, and the objective is to predict the original vocabulary id of each masked word from its context. Since MLM is so similar to text infilling, it is natural to apply BERT to the text infilling task.
First, we define a model for the text infilling task in bert_model.py. The `predict` function accepts a batch of sentences and returns the predicted words at the `[MASK]` positions:

```python
class TextInfillingModel(object):
    ...


batch = ["twinkle twinkle [MASK] star.",
         "Happy birthday to [MASK].",
         "the answer to life, the [MASK], and everything."]
model = TextInfillingModel()
outputs = model.predict(batch)
print(outputs)
# ['little', 'you', 'universe']
```
Note: Please download the pre-trained BERT model first.
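For example, with the Hugging Face transformers library, the pre-trained weights can be fetched as follows (a sketch under the assumption that you use transformers; the repo's bert_model.py may load the weights differently):

```python
# Sketch: download pre-trained BERT weights via the `transformers` library.
# This is an assumption for illustration; bert_model.py may use another loader.
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
```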
Second, use Flask to provide a web service:

```python
from flask import Flask, request, jsonify

from bert_model import TextInfillingModel

app = Flask(__name__)
model = TextInfillingModel()

@app.route("/naive", methods=["POST"])
def naive_predict():
    inputs = request.form.getlist("s")
    outputs = model.predict(inputs)
    return jsonify(outputs)

app.run(port=5005)
```

Run flask_example.py and you get a vanilla web server:
```bash
curl -X POST http://localhost:5005/naive -d 's=Happy birthday to [MASK].'
["you"]
```
At this point, your web server can only serve 12 requests per second. See the benchmark section below for more details.
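To get a rough feel for this number yourself, you can fire concurrent requests from Python (a hypothetical measurement sketch using the requests library; wrk, used later in this README, is the more rigorous tool):

```python
# Rough throughput probe (hypothetical helper, not part of the repo):
# send 200 concurrent POSTs to the naive endpoint and compute requests/sec.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

def call_once(_):
    return requests.post("http://localhost:5005/naive",
                         data={"s": "Happy birthday to [MASK]."}).json()

start = time.time()
with ThreadPoolExecutor(max_workers=32) as pool:
    list(pool.map(call_once, range(200)))
print("requests/sec:", 200 / (time.time() - start))
```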
Third, encapsulate the model's predict function through service_streamer. Three lines of code raise the prediction speed of the BERT service to 200+ sentences per second (a 16x speedup).
```python
from service_streamer import ThreadedStreamer

streamer = ThreadedStreamer(model.predict, batch_size=64, max_latency=0.1)

@app.route("/stream", methods=["POST"])
def stream_predict():
    inputs = request.form.getlist("s")
    outputs = streamer.predict(inputs)
    return jsonify(outputs)

app.run(port=5005, debug=False)
```
```bash
wrk -t 2 -c 128 -d 20s --timeout=10s -s benchmark.lua http://127.0.0.1:5005/stream
...
Requests/sec: 200.31
```
Finally, encapsulate the model through Streamer and start service workers on multiple GPUs. Streamer further accelerates inference and reaches 1000+ sentences per second (an 80x speedup).
```python
from service_streamer import ManagedModel, Streamer

class ManagedBertModel(ManagedModel):

    def init_model(self):
        self.model = TextInfillingModel()

    def predict(self, batch):
        return self.model.predict(batch)

streamer = Streamer(ManagedBertModel, batch_size=64, max_latency=0.1,
                    worker_num=8, cuda_devices=(0, 1, 2, 3))
app.run(port=5005, debug=False)
```
This starts 8 GPU workers, evenly distributed across 4 GPUs.
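With worker_num=8 and cuda_devices=(0, 1, 2, 3), an even round-robin assignment of workers to devices looks like this (illustration only; not necessarily the library's exact scheduling code):

```python
# Illustration only: spreading 8 workers evenly over 4 GPUs round-robin.
cuda_devices = (0, 1, 2, 3)
worker_num = 8

for worker_index in range(worker_num):
    device = cuda_devices[worker_index % len(cuda_devices)]
    print(f"worker {worker_index} -> cuda:{device}")
# worker 0 -> cuda:0, worker 1 -> cuda:1, ..., worker 4 -> cuda:0, ...
```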
API

In general, inference is faster when parallel computing is applied to batched inputs:

```python
outputs = model.predict(batch_inputs)
```
ServiceStreamer schedules queued requests from users into mini-batches and forwards them to GPU workers. By trading a small delay (at most 0.1s by default), it enhances overall performance through a higher GPU utilization ratio.
```python
from service_streamer import ThreadedStreamer

# Encapsulate the batch-prediction function with a Streamer
streamer = ThreadedStreamer(model.predict, batch_size=64, max_latency=0.1)

# Replace model.predict with streamer.predict
outputs = streamer.predict(batch_inputs)
```
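Under the hood, the idea is roughly the following (a minimal sketch for illustration, not service_streamer's actual implementation): requests are pushed onto a queue, and a background thread drains up to batch_size items, waiting at most max_latency seconds, before calling the wrapped predict function once.

```python
# Minimal sketch of the batching idea, assuming a queue.Queue of request items.
# This is an illustration, not service_streamer's internal code.
import time
from queue import Queue, Empty

def collect_batch(q: Queue, batch_size: int = 64, max_latency: float = 0.1):
    """Drain up to batch_size items, waiting at most max_latency seconds."""
    batch = [q.get()]  # block until at least one request arrives
    deadline = time.monotonic() + max_latency
    while len(batch) < batch_size:
        timeout = deadline - time.monotonic()
        if timeout <= 0:
            break
        try:
            batch.append(q.get(timeout=timeout))
        except Empty:
            break
    return batch
```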
Then start the web server with multi-threading (or coroutines). By adding just a few lines of code, your server can usually run about 10x (batch_size / batch_per_request) faster.
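For example, here is a sketch of serving the Flask app with gevent instead of Flask's built-in threaded server (assumes gevent is installed; the repo's own examples may differ):

```python
# Sketch: serve the Flask app with gevent's WSGIServer instead of app.run().
# Assumes `pip install gevent`; not necessarily how the repo's examples do it.
from gevent.pywsgi import WSGIServer

http_server = WSGIServer(("0.0.0.0", 5005), app)
http_server.serve_forever()
```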
Distributed GPU Workers
In practice, the attainable throughput (QPS) of the web server is much higher than that of a single GPU model, so we also support pairing one web server with multiple GPU worker processes.
```python
from service_streamer import Streamer

# Spawn 4 GPU worker processes
streamer = Streamer(model.predict, 64, 0.1, worker_num=4)
outputs = streamer.predict(batch)
```
Streamer spawns subprocesses to run the GPU workers by default and uses inter-process queues for communication and queueing, so it can distribute a large number of requests across multiple workers. The prediction results are then returned in batches to the corresponding web server and forwarded to the corresponding HTTP responses.
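Conceptually, each worker runs a loop like the following (a minimal sketch for illustration, not service_streamer's actual code):

```python
# Illustration only: a GPU worker pulls a scheduled mini-batch from an input
# queue, runs one forward pass, and pushes the results back tagged with their
# request ids so the server can route each result to the right HTTP response.
from multiprocessing import Queue

def worker_loop(predict, input_queue: Queue, output_queue: Queue):
    while True:
        task_ids, batch = input_queue.get()  # blocks until a mini-batch is scheduled
        results = predict(batch)             # one forward pass for the whole batch
        output_queue.put(list(zip(task_ids, results)))
```

With worker_num=4, the four spawned worker processes show up in nvidia-smi: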
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.116                Driver Version: 390.116                   |
|-------------------------------+----------------------+----------------------+
...
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      7574      C   /home/liuxin/nlp/venv/bin/python             1889MiB |
|    1      7575      C   /home/liuxin/nlp/venv/bin/python             1889MiB |
|    2      7576      C   /home/liuxin/nlp/venv/bin/python             1889MiB |
|    3      7577      C   /home/liuxin/nlp/venv/bin/python             1889MiB |
+-----------------------------------------------------------------------------+
```
The method above is simple to set up, but initializing the model in the main process takes up extra memory, and the model can only run on the same GPU.
Therefore, we also provide the ManagedModel class, which supports lazy model initialization and migration as well as multiple GPUs.
```python
from service_streamer import ManagedModel, Streamer

class ManagedBertModel(ManagedModel):

    def init_model(self):
        self.model = Model()

    def predict(self, batch):
        return self.model.predict(batch)

# Spawn 4 GPU worker processes, evenly distributed across GPUs 0/1/2/3
streamer = Streamer(ManagedBertModel, 64, 0.1, worker_num=4,
                    cuda_devices=(0, 1, 2, 3))
outputs = streamer.predict(batch)
```
Distributed Web Server
Some CPU-intensive computations, such as image and text preprocessing, need to be done in the web server first; the preprocessed data is then forwarded to the GPU workers for prediction. In practice, CPU resources often become the performance bottleneck, so we also provide a mode where multiple web servers share one or more GPU workers.
Use RedisStreamer and specify a unique Redis address for all web servers and GPU workers:
```python
from service_streamer import RedisStreamer

# The parameter can be omitted, in which case localhost:6379 is used.
streamer = RedisStreamer(redis_broker="172.22.22.22:6379")
```
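The GPU workers are then started separately against the same broker, for example with run_redis_workers_forever (a sketch based on the FAQ example later in this README):

```python
# Start GPU workers that consume mini-batches from the shared Redis broker.
# A sketch based on the FAQ example below; adjust batch size and model as needed.
from service_streamer import run_redis_workers_forever
from bert_model import ManagedBertModel

if __name__ == "__main__":
    from multiprocessing import freeze_support
    freeze_support()
    run_redis_workers_forever(ManagedBertModel, 64)
```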
We then use gunicorn (or uwsgi) to implement reverse proxying and load balancing:

```bash
cd example
gunicorn -c redis_streamer_gunicorn.py flask_example:app
```
Each request is load-balanced across the web servers for CPU preprocessing and then evenly distributed to the GPU workers for model prediction.
Future API

You might be familiar with `Future` if you have used any concurrency library. When your scenario is not a web service, you can use the Future API directly to drive service_streamer for request queueing or distributed GPU computing.
```python
from service_streamer import ThreadedStreamer

streamer = ThreadedStreamer(model.predict, 64, 0.1)

xs = []
for i in range(200):
    future = streamer.submit([["How", "are", "you", "?"],
                              ["Fine", "."],
                              ["Thank", "you", "."]])
    xs.append(future)

# Collect all Future objects and wait for the asynchronous results
for future in xs:
    outputs = future.result()
    print(outputs)
```
Benchmark

We use wrk to conduct the benchmarks. All test examples and scripts can be found in example.
- gpu: Titan Xp
- cuda: 9.0
- pytorch: 1.1
Single GPU process
```bash
# start the threaded flask server
python example/flask_example.py

# benchmark the naive api without service_streamer
wrk -t 4 -c 128 -d 20s --timeout=10s -s benchmark.lua http://127.0.0.1:5005/naive

# benchmark the stream api with service_streamer
wrk -t 4 -c 128 -d 20s --timeout=10s -s benchmark.lua http://127.0.0.1:5005/stream
```
Multiple GPU processes
Here we compare against a single web-server process in order to measure the performance loss introduced by the communication and load-balancing mechanism of multiple GPU workers.
We adopt a gevent server here, because the multi-threaded Flask server had become the performance bottleneck; please refer to flask_multigpu_example.py.
```bash
wrk -t 8 -c 512 -d 20s --timeout=10s -s benchmark.lua http://127.0.0.1:5005/stream
```
ThreadedStreamer: due to the limitation of the Python GIL, multiple workers are meaningless, so we benchmark a single GPU worker for comparison.
Streamer: the performance improvement is no longer linear beyond 2 GPU workers. CPU utilization reaches 100% at that point, so the CPU becomes the bottleneck and Flask's performance is the obstacle.
Using the Future API to start multiple GPU processes
As the results show, the performance of service_streamer scales almost linearly with the number of GPU workers, and its inter-process communication is more efficient than Redis.
FAQ

Q: Why can multi-process inference end up slower than a single process?

A: In multi-process inference, if the model processes data with multi-threaded numpy, the extra CPU overhead can make multi-core computation slower than a single core. This kind of problem can occur with third-party libraries such as allennlp and spacy. It can be solved by setting the numpy thread environment variables:
```python
import os
os.environ["MKL_NUM_THREADS"] = "1"      # export MKL_NUM_THREADS=1
os.environ["NUMEXPR_NUM_THREADS"] = "1"  # export NUMEXPR_NUM_THREADS=1
os.environ["OMP_NUM_THREADS"] = "1"      # export OMP_NUM_THREADS=1

import numpy
```
Make sure to set these environment variables before importing numpy.
Q: When using RedisStreamer, if there is only one Redis broker but more than one model, the input batches may have different structures. How should this be handled?
A: Specify a prefix when initializing the workers and streamers; each streamer will then use its own unique channel.
An example of initializing the workers:
```python
from service_streamer import run_redis_workers_forever
from bert_model import ManagedBertModel

if __name__ == "__main__":
    from multiprocessing import freeze_support
    freeze_support()
    run_redis_workers_forever(ManagedBertModel, 64, prefix='channel_1')
    run_redis_workers_forever(ManagedBertModel, 64, prefix='channel_2')
```
An example of using the streamers to get results:
```python
from service_streamer import RedisStreamer

streamer_1 = RedisStreamer(prefix='channel_1')
streamer_2 = RedisStreamer(prefix='channel_2')

# predict
output_1 = streamer_1.predict(batch)
output_2 = streamer_2.predict(batch)
```