
AxonServe

Simple package for ML Model serving with gRPC

Installation

$ pip install axon_serve

Usage

Server

from axon_serve import PredictionService, GRPCService

# implement the PredictionService class
class TestPredictionService(PredictionService):
    def __init__(self):
        super().__init__()

    # override the predict method with your custom prediction logic;
    # model_input and the return value must be numpy arrays,
    # params is a dict of optional kwargs for the model
    def predict(self, model_input, params):
        print("model_input: ", model_input.shape)
        print("params: ", params)

        return model_input


if __name__ == "__main__":
    # instantiate your prediction service
    test_prediction_service = TestPredictionService()

    # register it with GRPCService, providing the port
    service = GRPCService(test_prediction_service, port=5005)

    # start the server
    service.start()
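The predict contract (numpy array in, numpy array out, with params as an optional dict) can be exercised on its own, without the server. Here is a small standalone sketch of prediction logic such a service might wrap; the normalize parameter is purely illustrative and not part of axon_serve:

```python
import numpy as np


def predict(model_input, params):
    """Toy prediction logic matching the AxonServe contract:
    numpy array in, numpy array out, params as an optional dict."""
    params = params or {}
    result = model_input.astype(np.float64)
    # hypothetical param: scale the input to unit norm when requested
    if params.get("normalize", False):
        norm = np.linalg.norm(result)
        if norm > 0:
            result = result / norm
    return result


print(predict(np.array([3.0, 4.0]), {"normalize": True}))  # [0.6 0.8]
```

The same function body would drop into a PredictionService subclass unchanged, since the server hands predict exactly these two arguments.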

Client

import numpy as np

from axon_serve import ModelServeClient

# instantiate the client, providing host and port
serve_client = ModelServeClient(host="localhost", port=5005)

model_input = np.array([1, 2, 3])
params = {"test_param": True}

# call predict method to send request to server
result = serve_client.predict(model_input, params)

print(result.shape)

TODOs

  • add tests
  • add support for secure channels
  • add support for arbitrary tensor serialization
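On the serialization TODO: a common approach for shipping arbitrary numpy tensors over gRPC is to send the raw buffer together with the dtype and shape needed to rebuild it. A hedged sketch of such a round trip (this is one possible scheme, not the package's current wire format):

```python
import numpy as np


def serialize_tensor(arr):
    """Pack a numpy array into raw bytes plus the metadata to rebuild it."""
    return arr.tobytes(), str(arr.dtype), arr.shape


def deserialize_tensor(data, dtype, shape):
    """Rebuild the array from raw bytes, dtype string, and shape tuple."""
    return np.frombuffer(data, dtype=dtype).reshape(shape)


original = np.arange(12, dtype=np.float32).reshape(3, 4)
data, dtype, shape = serialize_tensor(original)
restored = deserialize_tensor(data, dtype, shape)
assert np.array_equal(original, restored)
```

The bytes, dtype string, and shape map naturally onto a protobuf message with a `bytes` field and two repeated/string fields, which sidesteps any fixed tensor schema.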

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

axon_serve-0.0.1.tar.gz (6.6 kB)

Built Distribution

axon_serve-0.0.1-py3-none-any.whl (9.2 kB)
