
A scalable feature store that makes it easy to align offline and online ML systems


Aligned

Aligned helps improve ML system visibility, while also reducing technical and data debt, as described in Sculley et al. [2015].

Want to look at examples of how to use aligned? View the MatsMoll/aligned-example repo.

It does this by providing a new way of describing feature transformations and data flow in ML systems, while also collecting dependency metadata that would otherwise be too inconvenient and error-prone to write out by hand.

All of this comes from the simple API of defining data sources, feature views, and models.

As a result, loading model features is as easy as:

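# "store" is a loaded feature store; see the Access Data section below for how to create one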
entities = {"passenger_id": [1, 2, 3, 4]}
await store.model("titanic").features_for(entities).to_pandas()

Aligned is still in active development, so changes are likely.

Feature Views

Write features as they should be: as data models. Then get code completion and type safety by referencing them in other features.

This makes the features lightweight, data source independent, and flexible.

class TitanicPassenger(FeatureView):

    metadata = FeatureView.metadata_with(
        name="passenger",
        description="Some features from the titanic dataset",
        batch_source=FileSource.csv_at("titanic.csv"),
        stream_source=HttpStreamSource(topic_name="titanic")
    )

    passenger_id = Int32().as_entity()

    # Input values
    age = (
        Float()
            .description("A float as some have decimals")
            .is_required()
            .lower_bound(0)
            .upper_bound(110)
    )

    name = String()
    sex = String().accepted_values(["male", "female"])
    survived = Bool().description("If the passenger survived")
    sibsp = Int32().lower_bound(0, is_inclusive=True).description("Number of siblings on the Titanic")
    cabin = String()

    # Creates two one-hot encoded values
    is_male, is_female = sex.one_hot_encode(['male', 'female'])
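
Derived features are defined in the same way, by transforming features that are already declared. The snippet below is only an illustrative sketch: the helper methods (fill_na, standard_scaled, ordinal_categories) and the derived feature names are assumptions, chosen to match the model features referenced later on this page.

    # Hypothetical derived features, defined inside TitanicPassenger
    constant_filled_age = age.fill_na(26)
    scaled_age = constant_filled_age.standard_scaled()
    ordinal_sex = sex.ordinal_categories(["male", "female"])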

Data sources

Aligned makes handling data sources easy, as you do not have to think about how the data is read. Just define where the data is, and Aligned handles the dirty work.

my_db = PostgreSQLConfig(env_var="DATABASE_URL")
redis = RedisConfig(env_var="REDIS_URL")

class TitanicPassenger(FeatureView):

    metadata = FeatureView.metadata_with(
        name="passenger",
        description="Some features from the titanic dataset",
        batch_source=my_db.table(
            "passenger",
            mapping_keys={
                "Passenger_Id": "passenger_id"
            }
        ),
        stream_source=redis.stream(topic="titanic")
    )

    passenger_id = Int32().as_entity()

Fast development

Fast, iterative exploration is important in ML. This is why Aligned also makes it super easy to combine and test multiple sources.

my_db = PostgreSQLConfig.localhost()

aws_bucket = AwsS3Config(...)

class SomeFeatures(FeatureView):

    metadata = FeatureViewMetadata(
        name="some_features",
        description="...",
        batch_source=my_db.table("local_features")
    )

    # Some features
    ...

class AwsFeatures(FeatureView):

    metadata = FeatureViewMetadata(
        name="aws",
        description="...",
        batch_source=aws_bucket.file_at("path/to/file.parquet")
    )

    # Some features
    ...
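
Both views can then be queried through the same feature store, using the features_for API shown later on this page. A minimal sketch, where the entity and feature names are placeholders:

df = await store.features_for({
    "some_id": [1, 2, 3]
}, features=[
    "some_features:some_feature",  # read from the local PostgreSQL table
    "aws:other_feature",           # read from the S3 parquet file
]).to_pandas()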

Describe Models

You will usually need to combine multiple features for each model. This is where a Model comes in. Here you can define which features should be exposed.

class Titanic(Model):

    passenger = TitanicPassenger()
    location = LocationFeatures()

    metadata = Model.metadata_with(
        name="titanic",
        features=[
            passenger.constant_filled_age,
            passenger.ordinal_sex,
            passenger.sibsp,

            location.distance_to_shore,
            location.distance_to_closest_boat
        ]
    )

    # Referencing the passenger's survived feature as the target
    did_survive = passenger.survived.as_classification_target()

Data Enrichers

In many cases, extra data is needed in order to generate some features. We therefore need some way of enriching the data. This can easily be done with Aligned's DataEnrichers.

from datetime import timedelta

from pandas import DataFrame, Series

my_db = PostgreSQLConfig.localhost()
redis = RedisConfig.localhost()

user_location = my_db.data_enricher( # Fetch all user locations
    sql="SELECT * FROM user_location"
).cache( # Cache them for one day
    ttl=timedelta(days=1),
    cache_key="user_location_cache"
).lock( # Make sure only one processer fetches the data at a time
    lock_name="user_location_lock",
    redis_config=redis
)


async def distance_to_users(df: DataFrame) -> Series:
    user_location_df = await user_location.load()
    ...
    return distances

class SomeFeatures(FeatureView):

    metadata = FeatureViewMetadata(...)

    latitude = Float()
    longitude = Float()

    distance_to_users = Float().transformed_using_features_pandas(
        [latitude, longitude],
        distance_to_users
    )
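
For illustration, a hypothetical distance_to_users could be implemented as below. This is only a sketch: the latitude and longitude column names on the enriched data are assumptions.

import numpy as np
from pandas import DataFrame, Series

async def distance_to_users(df: DataFrame) -> Series:
    user_location_df = await user_location.load()
    # Haversine distance (km) from each row to the first user location —
    # purely illustrative
    lat1, lon1 = np.radians(df["latitude"]), np.radians(df["longitude"])
    lat2 = np.radians(user_location_df["latitude"].iloc[0])
    lon2 = np.radians(user_location_df["longitude"].iloc[0])
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * np.arcsin(np.sqrt(a))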

Access Data

You can easily create a feature store that contains all your feature definitions. This can then be used to generate data sets, set up an instance to serve features, DAGs, etc.

store = await FileSource.json_at("./feature-store.json").feature_store()

# Select all features from a single feature view
df = await store.all_for("passenger", limit=100).to_pandas()

Centralized Feature Store Definition

You will often want to share features with coworkers, or split them into different stages, like staging, shadow, or production. One option is therefore to reference the storage you use, and load the FeatureStore from there.

aws_bucket = AwsS3Config(...)
store = await aws_bucket.json_at("production.json").feature_store()

# This switches from the production online store to the offline store
# Aka. the batch sources defined on the feature views
experimental_store = store.offline_store()

This json file can be generated by running aligned apply.

Select multiple feature views

df = await store.features_for({
    "passenger_id": [1, 50, 110]
}, features=[
    "passenger:scaled_age",
    "passenger:is_male",
    "passenger:sibsp"

    "other_features:distance_to_closest_boat",
]).to_polars()

Model Service

Selecting features for a model is super simple.

df = await store.model("titanic_model").features_for({
    "passenger_id": [1, 50, 110]
}).to_pandas()

Feature View

If you only want to select features from a specific feature view, that is also possible.

prev_30_days = await store.feature_view("match").previous(days=30).to_pandas()
sample_of_20 = await store.feature_view("match").all(limit=20).to_pandas()

Data quality

Aligned will make sure all the different features are formatted as the correct data type. In addition, Aligned will also make sure that the returned features align with the defined constraints.

class TitanicPassenger(FeatureView):

    ...

    age = (
        Float()
            .is_required()
            .lower_bound(0)
            .upper_bound(110)
    )
    sibsp = Int32().lower_bound(0, is_inclusive=True)

Then, since our feature view has an is_required and a lower_bound constraint, the .validate(...) command will filter out the entities that do not satisfy them.

from aligned.validation.pandera import PanderaValidator

df = await store.model("titanic_model").features_for({
    "passenger_id": [1, 50, 110]
}).validate(
    PanderaValidator()  # Validates all features
).to_pandas()

Feature Server

You can define how to serve your features with the FeatureServer. Here you can define where you want to load your features from, and potentially where to write them to.

By default, aligned looks for a file called server.py containing a FeatureServer object called server. However, this can be defined manually as well.

from aligned import RedisConfig, FileSource
from aligned.schemas.repo_definition import FeatureServer

store = FileSource.json_at("feature-store.json")

server = FeatureServer.from_reference(
    store,
    RedisConfig.localhost()
)

Then run aligned serve, and a FastAPI server will start. Here you can push new features, which are then transformed and stored, or just fetch them.

Stream Worker

You can also set up stream processing with a similar structure. However, here a StreamWorker is used instead.

By default, aligned looks for a worker.py file with an object called worker. An example would be the following.

from aligned import RedisConfig, FileSource
from aligned.schemas.repo_definition import StreamWorker

store = FileSource.json_at("feature-store.json")

# Mirrors the FeatureServer setup above; the exact StreamWorker
# signature may differ between versions
worker = StreamWorker.from_reference(
    store,
    RedisConfig.localhost()
)
