fastText model serving API server
Installation
You can download a prebuilt binary from the GitHub releases page, or install it with Cargo:
cargo install fasttext-serving
Using Docker:
docker pull messense/fasttext-serving
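When running the image, the model file has to be made available inside the container and the server bound to an address reachable from the host. A sketch of a typical invocation, using only the flags documented below (`/path/to/model.bin` and the port mapping are placeholders, not values prescribed by the project):

```shell
# Mount a local model file into the container and expose the HTTP port.
# --address 0.0.0.0 makes the server reachable from outside the container.
docker run --rm -p 8000:8000 -v /path/to/model.bin:/model.bin \
    messense/fasttext-serving --model /model.bin --address 0.0.0.0 --port 8000
```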
Usage
$ fasttext-serving --help
USAGE:
    fasttext-serving [OPTIONS] --model <model>

FLAGS:
        --grpc       Serving gRPC API instead of HTTP API
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -a, --address <address>    Listen address [default: 127.0.0.1]
    -m, --model <model>        Model path
    -p, --port <port>          Listen port [default: 8000]
    -w, --workers <workers>    Worker thread count, defaults to CPU count
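For example, the options above can be combined to serve a model on a non-default port (the model path is a placeholder for illustration):

```shell
# Serve a local fastText model over HTTP on port 8080 with 4 worker threads.
fasttext-serving --model /path/to/model.bin --port 8080 --workers 4
```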
Serve HTTP REST API
HTTP API endpoint:
POST /predict
The POST body should be a JSON array of strings, for example ["abc", "def"].
CURL example:
$ curl -X POST -H 'Content-Type: application/json' \
--data "[\"Which baking dish is best to bake a banana bread?\", \"Why not put knives in the dishwasher?\"]" \
'http://localhost:8000/predict'
[[["baking"],[0.7152988]],[["equipment"],[0.73479545]]]
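The response pairs each input text with its predicted labels and their probabilities, as nested `[[labels], [probabilities]]` arrays. A small Python sketch of consuming that format; the `parse_predictions` helper below is illustrative, not part of the project:

```python
import json

def parse_predictions(raw):
    """Convert the server's [[labels], [probabilities]] pairs into
    lists of (label, probability) tuples, one list per input text."""
    return [list(zip(labels, probs)) for labels, probs in raw]

# Sample response body from the curl example above.
body = '[[["baking"],[0.7152988]],[["equipment"],[0.73479545]]]'
print(parse_predictions(json.loads(body)))
# → [[('baking', 0.7152988)], [('equipment', 0.73479545)]]
```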
Serve gRPC API
Run the command with the --grpc flag to serve the gRPC API instead of the HTTP REST API.
Please refer to the gRPC Python client documentation.
License
This work is released under the MIT license. A copy of the license is provided in the LICENSE file.
Source Distribution
Hashes for fasttext_serving_server-0.6.2.tar.gz

Algorithm | Hash digest
---|---
SHA256 | fdd67c6fbd04ee45fd7e51559e7552287efc90b45390629188bcd1b4df5b50b7
MD5 | 1454469b7414b2cc16571ca57e163612
BLAKE2b-256 | 747fa71e7542082f251c2c6f8eea30ee33c9da021a4e5cfea01f19c45d539817