A scalable feature store that makes it easy to align offline and online ML systems


Aligned

Aligned helps you define a single source of truth for your feature logic while keeping the technology stack flexible. This is possible because Aligned does not depend on a specific processing engine, which leads to less, and more transparent, code. Furthermore, the declarative API makes it possible to document features, add data validation, and define feature transformations in the same location, leading to a precise definition of the intended result.

Main advantages:

  • Test new features faster
  • Adapt faster to new technical and business requirements
  • Avoid technology lock-in, such as processing engines and infrastructure
  • Avoid vendor lock-in: deploy to any provider that fits you

As a result, loading a model's features can be done with the following code.

await store.model("titanic").features_for(entities).as_pandas()

Read the post about how the most elegant MLOps tool was created

Also check out the example repo to see how it can be used.

Aligned is still in active development, so changes are likely.

Feature Views

Write features as they should be: as data models. Then get code completion and type safety by referencing them in other features.

This makes the features lightweight, data source independent, and flexible.

class TitanicPassenger(FeatureView):

    metadata = FeatureViewMetadata(
        name="passenger",
        description="Some features from the titanic dataset",
        batch_source=FileSource.csv_at("titanic.csv"),
        stream_source=HttpStreamSource(topic_name="titanic")
    )

    passenger_id = Entity(dtype=Int32())

    # Input values
    age = (
        Float()
            .description("A float as some have decimals")
            .is_required()
            .lower_bound(0)
            .upper_bound(110)
    )

    name = String()
    sex = String().accepted_values(["male", "female"])
    survived = Bool().description("If the passenger survived")
    sibsp = Int32().lower_bound(0, is_inclusive=True).description("Number of siblings on the Titanic")
    cabin = String()

    # Creates two one hot encoded values
    is_male, is_female = sex.one_hot_encode(['male', 'female'])

    # Standard scale the age.
    # This will fit the scaler using a data slice from the batch source
    # limited to a maximum of 100 rows. We can also use a time constraint if wanted
    scaled_age = age.standard_scaled(limit=100)
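For intuition, the two derived features above correspond roughly to the following plain pandas transformations (this is an illustration of the semantics, not Aligned code):

```python
import pandas as pd

df = pd.DataFrame({
    "sex": ["male", "female", "male"],
    "age": [22.0, 38.0, 26.0],
})

# One hot encode `sex` into two boolean features
df["is_male"] = df["sex"] == "male"
df["is_female"] = df["sex"] == "female"

# Standard scale `age`: subtract the fitted mean and divide by the std
mean, std = df["age"].mean(), df["age"].std()
df["scaled_age"] = (df["age"] - mean) / std
```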

Data sources

Aligned makes handling data sources easy, as you do not have to think about how the data is loaded. Simply define where the data is, and Aligned handles the dirty work.

my_db = PostgreSQLConfig(env_var="DATABASE_URL")

class TitanicPassenger(FeatureView):

    metadata = FeatureViewMetadata(
        name="passenger",
        description="Some features from the titanic dataset",
        batch_source=my_db.table(
            "passenger",
            mapping_keys={
                "Passenger_Id": "passenger_id"
            }
        ),
        stream_source=HttpStreamSource(topic_name="titanic")
    )

    passenger_id = Entity(dtype=Int32())
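The mapping_keys option renames source columns to the names the feature view expects. Conceptually it is equivalent to a plain rename like this (illustration only, assuming the table has been read into a pandas DataFrame):

```python
import pandas as pd

# A source table whose column casing does not match the feature view
raw = pd.DataFrame({"Passenger_Id": [1, 2], "Age": [22.0, 38.0]})

# Rename the source column to match the feature view's entity name
df = raw.rename(columns={"Passenger_Id": "passenger_id"})
```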

Fast development

Fast, iterative exploration is important in ML. This is why Aligned also makes it super easy to combine and test multiple sources.

my_db = PostgreSQLConfig.localhost()

aws_bucket = AwsS3Config(...)

class SomeFeatures(FeatureView):

    metadata = FeatureViewMetadata(
        name="some_features",
        description="...",
        batch_source=my_db.table("local_features")
    )

    # Some features
    ...

class AwsFeatures(FeatureView):

    metadata = FeatureViewMetadata(
        name="aws",
        description="...",
        batch_source=aws_bucket.file_at("path/to/file.parquet")
    )

    # Some features
    ...

Model Service

Usually you will need to combine multiple features for each model. This is where a ModelService comes in. Here you can define which features should be exposed.

# Uses the variable name as the model service name.
# A custom name can also be defined, if wanted.
titanic_model = ModelService(
    features=[
        TitanicPassenger.select_all(),

        # Select features with code completion
        LocationFeatures.select(lambda view: [
            view.distance_to_shore,
            view.distance_to_closest_boat
        ]),
    ]
)

Data Enrichers

In many cases extra data is needed in order to generate some features, so we need some way of enriching the data. This can easily be done with Aligned's DataEnrichers.

my_db = PostgreSQLConfig.localhost()
redis = RedisConfig.localhost()

user_location = my_db.data_enricher( # Fetch all user locations
    sql="SELECT * FROM user_location"
).cache( # Cache them for one day
    ttl=timedelta(days=1),
    cache_key="user_location_cache"
).lock( # Make sure only one processor fetches the data at a time
    lock_name="user_location_lock",
    redis_config=redis
)


async def distance_to_users(df: DataFrame) -> Series:
    user_location_df = await user_location.load()
    ...
    return distances

class SomeFeatures(FeatureView):

    metadata = FeatureViewMetadata(...)

    latitude = Float()
    longitude = Float()

    distance_to_users = Float().transformed(distance_to_users, using_features=[latitude, longitude])
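As a sketch of what such a transformation might compute, here is a hypothetical haversine distance from each row's position to a fixed user location (the coordinates and the single fixed user are made up for illustration; the real enricher would use the loaded user_location data):

```python
from math import asin, cos, radians, sin, sqrt

import pandas as pd

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

df = pd.DataFrame({"latitude": [59.91, 60.39], "longitude": [10.75, 5.32]})

# Distance from each row to one (hypothetical) user location
user_lat, user_lon = 59.91, 10.75
distances = df.apply(
    lambda row: haversine_km(row["latitude"], row["longitude"], user_lat, user_lon),
    axis=1,
)
```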

Access Data

You can easily create a feature store that contains all your feature definitions. This can then be used to generate data sets, set up an instance to serve features, build DAGs, etc.

store = FeatureStore.from_dir(".")

# Select all features from a single feature view
df = await store.all_for("passenger", limit=100).to_df()

Centralized Feature Store Definition

You will often want to share features with coworkers, or split them into different stages, like staging, shadow, or production. One option is therefore to reference the storage you use and load the FeatureStore from there.

aws_bucket = AwsS3Config(...)
store = await aws_bucket.file_at("production.json").feature_store()

# This switches from the production online store to the offline store
# I.e. the batch sources defined on the feature views
experimental_store = store.offline_store()

This JSON file can be generated by running aligned apply.

Select multiple feature views

df = await store.features_for({
    "passenger_id": [1, 50, 110]
}, features=[
    "passenger:scaled_age",
    "passenger:is_male",
    "passenger:sibsp"

    "other_features:distance_to_closest_boat",
]).to_df()

Model Service

Selecting features for a model is super simple.

df = await store.model("titanic_model").features_for({
    "passenger_id": [1, 50, 110]
}).to_df()

Feature View

If you only want to select features from a specific feature view, that is also possible.

prev_30_days = await store.feature_view("match").previous(days=30).to_df()
sample_of_20 = await store.feature_view("match").all(limit=20).to_df()
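Conceptually, previous(days=30) corresponds to filtering rows by their event timestamp, along these lines (plain pandas, for illustration):

```python
from datetime import datetime, timedelta, timezone

import pandas as pd

now = datetime.now(tz=timezone.utc)
df = pd.DataFrame({
    "event_timestamp": [now - timedelta(days=5), now - timedelta(days=45)],
    "goals": [2, 1],
})

# Keep only rows from the previous 30 days
cutoff = now - timedelta(days=30)
prev_30_days = df[df["event_timestamp"] >= cutoff]
```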

Data quality

Aligned will make sure all the different features are formatted as the correct data type. In addition, Aligned will also make sure that the returned features align with the defined constraints.

class TitanicPassenger(FeatureView):

    ...

    age = (
        Float()
            .is_required()
            .lower_bound(0)
            .upper_bound(110)
    )
    sibsp = Int32().lower_bound(0, is_inclusive=True)

Then, since our feature view has an is_required and a lower_bound constraint, the .validate(...) command will filter out the entities that do not satisfy them.

from aligned.validation.pandera import PanderaValidator

df = await store.model("titanic_model").features_for({
    "passenger_id": [1, 50, 110]
}).validate(
    PanderaValidator()  # Validates all features
).to_df()
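As an illustration of what this filtering does for the constraints on age, rows that are missing a value, negative, or above 110 get dropped (a plain pandas sketch, not the actual validator):

```python
import pandas as pd

df = pd.DataFrame({"age": [22.0, None, 150.0, 40.0]})

# `is_required`, `lower_bound(0)` and `upper_bound(110)` combined:
# drop rows where `age` is missing or outside [0, 110]
valid = df[df["age"].between(0, 110)]
```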

Feature Server

This expects that you either run the command in your feature store repo, or have a file with a RepoReference instance. You can also set up an online source like Redis for faster storage.

redis = RedisConfig.localhost()

aws_bucket = AwsS3Config(...)

repo_files = RepoReference(
    env_var_name="ENVIRONMENT",
    repo_paths={
        "production": aws_bucket.file_at("feature-store/production.json"),
        "shadow": aws_bucket.file_at("feature-store/shadow.json"),
        "staging": aws_bucket.file_at("feature-store/staging.json")
        # else generate the feature store from the current dir
    }
)

# Use Redis as the online source, if not running locally
if repo_files.selected != "local":
    online_source = redis.online_source()

Then run aligned serve, and a FastAPI server will start. Here you can push new features, which are then transformed and stored, or simply fetch them.
