
Project description

GDMix

What is it

Generalized Deep Mixed Model (GDMix) is a framework for training non-linear fixed effect and random effect models. Models of this kind are widely used in the personalization of search and recommender systems. This project is an extension of our earlier effort on generalized linear models, Photon ML. It is implemented in TensorFlow, SciPy, and Spark.

The current version of GDMix supports logistic regression and DeText models for the fixed effect, and logistic regression for the random effects. In the future, we may support deep models for random effects if the increased complexity can be justified by improvements in relevance metrics.
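Concretely, in the notation commonly used for generalized linear mixed models such as GLMix/Photon ML (a sketch of the additive score structure, not notation taken from the GDMix code), the score of an example is the sum of a global fixed effect term and per-entity random effect terms:

    g(E[y_{ij}]) = x_{ij}^T b + s_{ij}^T \alpha_i + t_{ij}^T \beta_j

where b is the fixed effect coefficient vector shared by all examples, \alpha_i and \beta_j are random effect coefficients specific to, for example, member i and item j, and g is the link function (the logit for logistic regression).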

Supported models

Logistic regression

As a basic classification model, logistic regression finds wide usage in search and recommender systems due to its simplicity and training efficiency. Our implementation uses TensorFlow for data reading and gradient computation, and utilizes the L-BFGS solver from SciPy. This combination takes advantage of the versatility of TensorFlow and the fast convergence of L-BFGS. This mode is functionally equivalent to Photon ML but with improved efficiency. Our internal tests show a 10% to 40% training speed improvement on various datasets.
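As a rough illustration of this combination (a minimal sketch under simplifying assumptions, not the GDMix implementation; the data and function names are made up), the snippet below computes the logistic loss and its gradient with TensorFlow and hands them to SciPy's L-BFGS solver:

    import numpy as np
    import tensorflow as tf
    from scipy.optimize import minimize

    def fit_logistic_regression(X, y):
        """Fit a logistic regression using TF gradients and SciPy's L-BFGS solver."""
        X = tf.constant(X, dtype=tf.float32)
        y = tf.constant(y, dtype=tf.float32)
        n_features = int(X.shape[1])

        def loss_and_grad(w_np):
            # TensorFlow computes the loss and gradient for the current weights.
            w = tf.Variable(w_np.reshape(-1, 1), dtype=tf.float32)
            with tf.GradientTape() as tape:
                logits = tf.matmul(X, w)
                loss = tf.reduce_mean(
                    tf.nn.sigmoid_cross_entropy_with_logits(labels=y[:, None], logits=logits))
            grad = tape.gradient(loss, w)
            # SciPy expects float64 numpy values.
            return float(loss.numpy()), grad.numpy().ravel().astype(np.float64)

        result = minimize(loss_and_grad, np.zeros(n_features), jac=True, method="L-BFGS-B")
        return result.x  # learned coefficients

    # Toy usage: 1000 examples, 5 features, random binary labels.
    rng = np.random.default_rng(0)
    coefficients = fit_logistic_regression(
        rng.normal(size=(1000, 5)),
        rng.integers(0, 2, size=1000).astype(np.float32))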

DeText models

DeText is a framework for ranking with an emphasis on textual features. GDMix supports DeText training natively as a global model. A user can specify the fixed effect model type as DeText and provide the network specifications; GDMix will then train and score it automatically and connect the model to the subsequent random effect models. Currently, only the pointwise loss function from DeText can be connected to the logistic regression random effect models.

Other models

GDMix can work with any deep learning fixed effect model. The interface between GDMix and other models is file I/O. A user can train a model outside GDMix, score the training data with that model, and save the scores to files, which are the input to GDMix random effect training. This enables the user to train random effect models based on scores from a custom fixed effect model that is not natively supported by GDMix.
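A minimal sketch of that hand-off (the model, field names, and file format below are illustrative assumptions, not the exact schema GDMix expects): score each training example with a custom model and write one (uid, score) record per line for the random effect step to consume.

    import json
    import numpy as np
    import tensorflow as tf

    def score_and_dump(model, features, uids, output_path):
        """Score every training example and persist (uid, score) records."""
        scores = model.predict(features, batch_size=1024).ravel()
        with open(output_path, "w") as f:
            for uid, score in zip(uids, scores):
                # One JSON record per line; the random effect trainer would read
                # these scores as the fixed effect offset for each example.
                f.write(json.dumps({"uid": int(uid), "score": float(score)}) + "\n")

    # Toy usage with a stand-in custom fixed effect model.
    model = tf.keras.Sequential([tf.keras.layers.Dense(16, activation="relu"),
                                 tf.keras.layers.Dense(1)])
    X = np.random.normal(size=(100, 8)).astype("float32")
    score_and_dump(model, X, np.arange(100), "fixed_effect_scores.jsonl")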

Training efficiency

For logistic regression models, training efficiency is achieved by parallel training. Since the fixed effect model is usually trained on a large amount of data, synchronous training based on the TensorFlow all-reduce operation is utilized. Each worker takes a portion of the training data and computes the local gradient. The gradients are aggregated and then fed to the L-BFGS solver. The training dataset for each random effect model is usually small; however, the number of models (e.g. individual models for all LinkedIn members) can be on the order of hundreds of millions. This requires a partitioning and parallel training strategy, where each worker is responsible for a portion of the population and all the workers train their assigned models independently and simultaneously, as in the sketch below.
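The following is a rough sketch of that partitioning strategy (the hash-based sharding, per-entity solver, and process pool are illustrative assumptions, not GDMix code): entities are sharded across workers, and each worker fits its assigned per-entity logistic regressions independently.

    from concurrent.futures import ProcessPoolExecutor
    import numpy as np
    from scipy.optimize import minimize

    def fit_per_entity(X, y):
        """Fit one small per-entity logistic regression with L-BFGS."""
        def loss_and_grad(w):
            p = 1.0 / (1.0 + np.exp(-(X @ w)))
            loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
            grad = X.T @ (p - y) / len(y)
            return loss, grad
        return minimize(loss_and_grad, np.zeros(X.shape[1]), jac=True, method="L-BFGS-B").x

    def train_shard(shard):
        """Train all per-entity models assigned to one worker."""
        return {entity_id: fit_per_entity(X, y) for entity_id, (X, y) in shard.items()}

    def train_random_effects(per_entity_data, num_workers=4):
        # Assign each entity to a worker shard by a hash of its id.
        shards = [dict() for _ in range(num_workers)]
        for entity_id, data in per_entity_data.items():
            shards[hash(entity_id) % num_workers][entity_id] = data
        # Workers train their shards independently and simultaneously.
        with ProcessPoolExecutor(max_workers=num_workers) as pool:
            results = pool.map(train_shard, shards)
        merged = {}
        for partial in results:
            merged.update(partial)
        return merged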

For DeText models, efficiency is achieved by either TensorFlow parameter-server-based asynchronous distributed training or Horovod-based synchronous distributed training.
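For reference, a generic Horovod synchronous data-parallel training sketch is shown below (a placeholder Keras model on random data, not DeText or GDMix internals): each worker trains on its own data shard and Horovod all-reduces the gradients before every optimizer step.

    import numpy as np
    import tensorflow as tf
    import horovod.tensorflow.keras as hvd

    hvd.init()

    model = tf.keras.Sequential([tf.keras.layers.Dense(16, activation="relu"),
                                 tf.keras.layers.Dense(1, activation="sigmoid")])

    # Scale the learning rate by the number of workers, as is conventional for
    # synchronous data-parallel training, and wrap the optimizer so that
    # gradients are all-reduced across workers.
    opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(0.001 * hvd.size()))
    model.compile(optimizer=opt, loss="binary_crossentropy")

    # Each rank would train on its own shard; random placeholder data here.
    X = np.random.normal(size=(1000, 8)).astype("float32")
    y = np.random.randint(0, 2, size=(1000, 1)).astype("float32")

    model.fit(X, y, batch_size=64, epochs=1,
              # Broadcast rank 0's initial weights so all workers start in sync.
              callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
              verbose=1 if hvd.rank() == 0 else 0)

Such a script would typically be launched with something like "horovodrun -np 4 python train.py".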

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

gdmix-trainer-0.3.0.tar.gz (37.9 kB)

Uploaded Source

Built Distribution

gdmix_trainer-0.3.0-py3-none-any.whl (47.3 kB)

Uploaded Python 3

File details

Details for the file gdmix-trainer-0.3.0.tar.gz.

File metadata

  • Download URL: gdmix-trainer-0.3.0.tar.gz
  • Upload date:
  • Size: 37.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.6.1 requests/2.25.0 setuptools/50.3.1.post20201107 requests-toolbelt/0.9.1 tqdm/4.52.0 CPython/3.7.4

File hashes

Hashes for gdmix-trainer-0.3.0.tar.gz
  • SHA256: 19e8816438abca72f75440d8723e0eddbf0e4c2c901ed13a0a6f6f84d719292a
  • MD5: 9dce24f124eef44b08bf599f95b3473f
  • BLAKE2b-256: b6181d6e15879b14f41dcf837c73158fb4fe722db54e1d5d43884487ddc3bae4

See more details on using hashes here.

File details

Details for the file gdmix_trainer-0.3.0-py3-none-any.whl.

File metadata

  • Download URL: gdmix_trainer-0.3.0-py3-none-any.whl
  • Upload date:
  • Size: 47.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.6.1 requests/2.25.0 setuptools/50.3.1.post20201107 requests-toolbelt/0.9.1 tqdm/4.52.0 CPython/3.7.4

File hashes

Hashes for gdmix_trainer-0.3.0-py3-none-any.whl
  • SHA256: e5cb27a85d1a32e90dbe23b1836c920716c23b0a511a1bf1a94cec603f1720b4
  • MD5: 9a667aa1746bb502c1a53e36b0850d35
  • BLAKE2b-256: 99d8bb4d7af42e9be1a691b472f055c10c6337efd34f81b614fd35226cffffd6

See more details on using hashes here.
