An experimentalist that uses statistical leverage to determine which datapoints to sample next.

Project description

The Leverage Sampler

This sampler uses the statistical concept of leverage by refitting the provided models iteratively with the leave-one-out method.


WARNING: This sampler needs to fit each model you provide n times, where n corresponds to the number of datapoints you have. As such, the computational time and power needed to run this sampler grow rapidly with the number of models and datapoints: every additional datapoint adds a full refit of every model.


In each iteration, it computes the degree to which the currently removed datapoint influences the model. If the model remains stable, the datapoint is deemed to have little influence on it, and as such will have a low likelihood of being selected for further investigation. In contrast, if the model changes, the datapoint is influential, and it has a higher likelihood of being selected for further investigation.
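To make the leave-one-out refitting concrete, here is a minimal sketch (not the package's internal implementation), assuming scikit-learn-compatible models that support fit, predict, and sklearn.base.clone:

import numpy as np
from sklearn.base import clone

def leave_one_out_refits(X, y, model):
    """Refit a copy of `model` once per datapoint, leaving that datapoint out each time."""
    refit_models = []
    for i in range(len(X)):
        X_loo = np.delete(X, i, axis=0)  # all rows except row i
        y_loo = np.delete(y, i, axis=0)
        model_i = clone(model)           # fresh, unfitted copy of the provided model
        model_i.fit(X_loo, y_loo)
        refit_models.append(model_i)
    return refit_models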

Specifically, you provide the sampler with a model that has been trained on all of the data. On each iteration, the sampler fits a new model with all data aside from one datapoint. Both models ($m$) then predict Y scores ($Y'$) from the original X variable and compute a mean squared error (MSE) for each X score ($i$):

$$MSE_{m,i} = \sum(Y'_{m,i} - Y_{i})^{2}$$
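A sketch of this error term for a single (already fitted) model, using the hypothetical helper name model_mse; it simply sums the squared prediction errors as in the formula above:

import numpy as np

def model_mse(model, X, y):
    # Sum of squared errors of the model's predictions against the original data,
    # i.e. the MSE term in the formula above for one model (and one refit).
    y_pred = np.asarray(model.predict(X)).reshape(y.shape)
    return np.sum((y_pred - y) ** 2)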

The sampler then computes a ratio of the MSE scores between the sampler model and the original model that you provided:

$${MSE_{Ratio}}_{m,i} = {MSE_{sampler}}_{m,i} / {MSE_{original}}_{m}$$

As such, values above one indicate that the original model fit the data better than the sampler model when removing that datapoint ($i$). In contrast, values below one indicate that the sampler model fit the data better than the original model when removing that datapoint ($i$). A value of one indicates that both models fit the data equally well. If you provide multiple models, the sampler averages across these models to produce an aggregate MSE ratio for each X score. In the future, it might be a good idea to incorporate multiple models in a more sophisticated way.
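Putting the pieces together, here is a minimal sketch of the ratio and averaging step, reusing the hypothetical leave_one_out_refits and model_mse helpers from the sketches above (again, not the package's actual code):

import numpy as np

def mse_ratios(X, y, models):
    # Average MSE ratio per datapoint, aggregated across all provided (already fitted) models.
    ratios = np.zeros((len(models), len(X)))
    for m, original_model in enumerate(models):
        mse_original = model_mse(original_model, X, y)      # MSE_original_m (fit on all data)
        for i, refit_model in enumerate(leave_one_out_refits(X, y, original_model)):
            mse_sampler = model_mse(refit_model, X, y)      # MSE_sampler_{m,i}
            ratios[m, i] = mse_sampler / mse_original       # MSE_ratio_{m,i}
    return ratios.mean(axis=0)                              # aggregate across models, one value per X score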

Finally, the sampler uses these aggregated ratios to select the next set of datapoints to explore in one of three ways, declared with the 'fit' parameter (see the sketch after this list):

- 'increase' will choose samples focused on X scores where the fits got better (i.e., the smallest MSE ratios)
- 'decrease' will choose samples focused on X scores where the fits got worse (i.e., the largest MSE ratios)
- 'both' will do both of the above, or in other words focus on the X scores with the most extreme ratios.
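As an illustration only, the three options could be implemented with an argsort over the aggregated ratios; treating 'both' as the ratios furthest from one is an assumption about what "most extreme" means:

import numpy as np

def select_indices(mean_ratios, fit='both', n_samples=5):
    # Pick datapoint indices according to the three 'fit' options described above.
    order = np.argsort(mean_ratios)              # ascending: smallest ratios first
    if fit == 'increase':
        return order[:n_samples]                 # fits that got better (smallest ratios)
    if fit == 'decrease':
        return order[-n_samples:]                # fits that got worse (largest ratios)
    # 'both': one way to read "most extreme" -- the ratios furthest from one in either direction
    return np.argsort(np.abs(mean_ratios - 1))[-n_samples:]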

Example Code

import numpy as np

from autora.experimentalist.sampler.leverage import leverage_sample
from autora.theorist.darts import DARTSRegressor
from sklearn.linear_model import LinearRegression

#Meta-Setup
X = np.linspace(start=-3, stop=6, num=10).reshape(-1, 1)
y = (X**2).reshape(-1, 1)
n = 5

#Theorists
darts_theorist = DARTSRegressor()
lr_theorist = LinearRegression()  # a regressor, since y is continuous
darts_theorist.fit(X, y)
lr_theorist.fit(X, y)

#Sampler
X_new = leverage_sample(X, y, [darts_theorist, lr_theorist], fit='both', n_samples=n)
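Per the description of the 'fit' parameter above, the same call can instead target only the datapoints where the leave-one-out fits got worse, for example:

X_worse = leverage_sample(X, y, [darts_theorist, lr_theorist], fit='decrease', n_samples=n)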

