Distributed framework to accelerate training and support serving.
OpenEmbedding is a distributed framework that accelerates TensorFlow training and supports TensorFlow Serving. It uses a parameter server architecture to store the embedding layer, so the memory of a single machine no longer limits model size. OpenEmbedding can also cooperate with an all-reduce framework to support both data parallelism and model parallelism.
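To illustrate the idea behind the parameter server design (this is a conceptual sketch, not the OpenEmbedding API; all class and method names here are hypothetical), the embedding table can be sharded by row across several server processes, so each machine only holds a slice of the table:

```python
# Conceptual sketch (hypothetical names, not the OpenEmbedding API):
# shard an embedding table across parameter-server shards so that no
# single machine has to hold the entire table in memory.

class EmbeddingShard:
    """One parameter server holding a slice of the embedding rows."""
    def __init__(self):
        self.rows = {}  # row id -> embedding vector

    def push(self, row_id, vector):
        self.rows[row_id] = vector

    def pull(self, row_id):
        return self.rows[row_id]


class ShardedEmbedding:
    """Routes each row to a shard by `row_id % num_shards`."""
    def __init__(self, num_shards, dim):
        self.shards = [EmbeddingShard() for _ in range(num_shards)]
        self.dim = dim

    def init_row(self, row_id):
        shard = self.shards[row_id % len(self.shards)]
        shard.push(row_id, [0.0] * self.dim)

    def lookup(self, row_ids):
        # A training worker gathers only the rows it needs for a batch.
        return [self.shards[r % len(self.shards)].pull(r) for r in row_ids]


table = ShardedEmbedding(num_shards=4, dim=8)
for r in range(100):
    table.init_row(r)
vectors = table.lookup([3, 17, 42])
```

In the real system the shards live in separate processes and the push/pull calls are network RPCs; dense (non-embedding) parameters can still be synchronized with an all-reduce framework, which is how data parallelism and model parallelism combine.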