Run an AllenNLP trained model and serve it with a web API.
Project description
allennlp-runmodel
Run an AllenNLP trained model and serve it with a web API.
Usage
Run the program
Execute the program in a terminal; the --help option will show the help message:
$ allennlp-runmodel --help
Usage: allennlp-runmodel [OPTIONS] COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]...
Start a webservice for running AllenNLP models.
Options:
-V, --version
-h, --host TEXT TCP/IP host for HTTP server. [default:
localhost]
-p, --port INTEGER TCP/IP port for HTTP server. [default:
8000]
-a, --path TEXT File system path for HTTP server Unix domain
socket. Listening on Unix domain sockets is
not supported by all operating systems.
-l, --logging-config FILE  Path to logging configuration file (JSON, YAML or INI) (ref: https://docs.python.org/library/logging.config.html#logging-config-dictschema)
-v, --logging-level [critical|fatal|error|warn|warning|info|debug|notset]
Sets the logging level; only effective when `--logging-config` is not specified. [default: info]
--help Show this message and exit.
Commands:
load  Load a pre-trained AllenNLP model from its archive file, and put it...
and
$ allennlp-runmodel load --help
Usage: allennlp-runmodel load [OPTIONS] ARCHIVE
Load a pre-trained AllenNLP model from its archive file, and put it into
the webservice container.
Options:
-m, --model-name TEXT  Model name used in URL. eg: http://xxx.xxx.xxx.xxx:8000/?model=model_name
-t, --num-threads INTEGER Sets the number of OpenMP threads used for
parallelizing CPU operations. [default: 4
(on this machine)]
-w, --max-workers INTEGER Uses a pool of at most max_workers threads
to execute calls asynchronously. [default:
num_threads/cpu_count (1 on this machine)]
-w, --worker-type [process|thread]
Sets whether workers execute in threads or
processes. [default: process]
-d, --cuda-device INTEGER If CUDA_DEVICE is >= 0, the model will be
loaded onto the corresponding GPU. Otherwise
it will be loaded onto the CPU. [default:
-1]
-e, --predictor-name TEXT Optionally specify which `Predictor`
subclass; otherwise, the default one for the
model will be used.
--help Show this message and exit.
The load sub-command can be called multiple times to load multiple models, e.g.:
allennlp-runmodel --port 8080 load --model-name model1 /path/of/model1.tar.gz load --model-name model2 /path/of/model2.tar.gz
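The -l/--logging-config option shown above accepts a JSON, YAML or INI file in the dictConfig format referenced in the help text. As a minimal sketch (the file name logging.yml is only an illustration, not something the package ships), a YAML configuration following Python's standard logging dict schema could look like the following, passed as allennlp-runmodel --logging-config logging.yml --port 8080 load ...:

# logging.yml -- standard Python dictConfig schema (see the logging.config docs)
version: 1
disable_existing_loggers: false
formatters:
  simple:
    format: "%(asctime)s %(levelname)s %(name)s: %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    level: INFO
    formatter: simple
root:
  level: INFO
  handlers: [console]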
Make a prediction from an HTTP client
curl \
--header "Content-Type: application/json" \
--request POST \
--data '{"premise":"Two women are embracing while holding to go packages.","hypothesis":"The sisters are hugging goodbye while holding to go packages after just eating lunch."}' \
"http://localhost:8080/?model=model1"
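The same request can also be made programmatically. The snippet below is a minimal Python sketch (not part of the package) that assumes the service started above is listening on localhost:8080 with model1 loaded; it uses the third-party requests library:

import requests

# JSON document expected by the loaded model (here, a textual-entailment pair).
payload = {
    "premise": "Two women are embracing while holding to go packages.",
    "hypothesis": "The sisters are hugging goodbye while holding to go packages "
                  "after just eating lunch.",
}

# POST to the service, selecting the target model via the `model` query parameter.
response = requests.post("http://localhost:8080/", params={"model": "model1"}, json=payload)
response.raise_for_status()

# The prediction is returned as a JSON document.
print(response.json())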
Download files
Source Distribution
allennlp-runmodel-0.2.1.tar.gz (18.4 kB)
Built Distribution
allennlp_runmodel-0.2.1-py3-none-any.whl (9.1 kB)
File details
Details for the file allennlp-runmodel-0.2.1.tar.gz.
File metadata
- Download URL: allennlp-runmodel-0.2.1.tar.gz
- Upload date:
- Size: 18.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/1.12.1 pkginfo/1.4.2 requests/2.21.0 setuptools/40.6.3 requests-toolbelt/0.8.0 tqdm/4.28.1 CPython/3.6.6
File hashes
Algorithm | Hash digest
---|---
SHA256 | ce95b33bc41cbd722bab6a3d0d4a5799364fe6d8e7f803f3fd909ec5d32ae9f6
MD5 | da7cad4d8f6010ceab43b22ef520df96
BLAKE2b-256 | 4edfd031d822dbc8e0e53fdc991f7700b9189b0d5a019b5e5b8b99e72757ac34
File details
Details for the file allennlp_runmodel-0.2.1-py3-none-any.whl.
File metadata
- Download URL: allennlp_runmodel-0.2.1-py3-none-any.whl
- Upload date:
- Size: 9.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/1.12.1 pkginfo/1.4.2 requests/2.21.0 setuptools/40.6.3 requests-toolbelt/0.8.0 tqdm/4.28.1 CPython/3.6.6
File hashes
Algorithm | Hash digest
---|---
SHA256 | 843dd5a48cae5e9e3226de5f1333d4c90b26cba6ca6cb8dbbb573467e84bc3db
MD5 | 73b26aecbe85b52fb6ec801d44556968
BLAKE2b-256 | 47a6dfc57aba1e27d63f88b7ed781473638d0615a06041466c4921b934d0db81