Feathr – An Enterprise-Grade, High Performance Feature Store
What is Feathr?
Feathr lets you:
- Define features based on raw data sources, including time-series data, using simple APIs.
- Get those features by their names during model training and model inferencing.
- Share features across your team and company.
Feathr automatically computes your feature values and joins them to your training data, using point-in-time-correct semantics to avoid data leakage. It also supports materializing and deploying your features for online use in production.
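Conceptually, a point-in-time-correct join picks, for each observation, the latest feature value at or before the observation's timestamp; using a later value would leak future information into training data. A minimal pure-Python sketch of that lookup (not Feathr's implementation, and with hypothetical sample data):

```python
from bisect import bisect_right
from datetime import datetime

def point_in_time_lookup(history, observation_ts):
    """Return the latest feature value whose timestamp is <= observation_ts.

    `history` is a list of (timestamp, value) pairs sorted by timestamp.
    """
    timestamps = [ts for ts, _ in history]
    i = bisect_right(timestamps, observation_ts)
    return history[i - 1][1] if i > 0 else None

# Hypothetical feature history for one key.
fare_history = [
    (datetime(2020, 4, 1), 10.0),
    (datetime(2020, 4, 5), 12.5),
    (datetime(2020, 4, 9), 14.0),
]

# An observation on April 7 must only see values up to April 5.
print(point_in_time_lookup(fare_history, datetime(2020, 4, 7)))  # 12.5
```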
For more details, read our documentation.
Running Feathr on Azure with 3 Simple Steps
Feathr has native cloud integration. To use Feathr on Azure, you only need three steps:
- Get the `Principal ID` of your account by running `az ad signed-in-user show --query objectId -o tsv` in the link below (select "Bash" if asked), and write down that value (something like `b65ef2e0-42b8-44a7-9b55-abbccddeefff`). Think of this ID as something representing you when accessing Azure; it will be used to grant permissions in the next step in the UI.
- Click the button below to deploy a minimal set of Feathr resources for demo purposes. You will need to fill in the `Principal ID` and `Resource Prefix`, and you will need "Owner" permission on the selected subscription.
- Run the Feathr Jupyter Notebook by clicking the button below. You only need to change the specified `Resource Prefix`.
Installing Feathr Client Locally
If you are not using the above Jupyter Notebook and want to install Feathr client locally, use this:
pip install -U feathr
Or use the latest code from GitHub:
pip install git+https://github.com/linkedin/feathr.git#subdirectory=feathr_project
Feathr Highlights
Defining Features with Transformation
features = [
    Feature(name="f_trip_distance",                    # Ingest feature data as-is
            feature_type=FLOAT),
    Feature(name="f_is_long_trip_distance",
            feature_type=BOOLEAN,
            transform="cast_float(trip_distance)>30"), # SQL-like syntax to transform raw data into a feature
    Feature(name="f_day_of_week",
            feature_type=INT32,
            transform="dayofweek(lpep_dropoff_datetime)")  # Built-in transformation
]

anchor = FeatureAnchor(name="request_features",  # Features anchored on the same source
                       source=batch_source,
                       features=features)
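The transform expressions above run in Spark, but their semantics are easy to sketch in plain Python. Note that Spark's `dayofweek` numbers days 1 (Sunday) through 7 (Saturday). A stdlib-only sketch of the two transforms (my helper names are hypothetical, not Feathr APIs):

```python
from datetime import datetime

def cast_float_gt_30(trip_distance):
    """Equivalent of the SQL-like transform `cast_float(trip_distance)>30`."""
    return float(trip_distance) > 30

def day_of_week(ts_string):
    """Equivalent of `dayofweek(...)`: Spark numbers Sunday=1 ... Saturday=7."""
    dt = datetime.strptime(ts_string, "%Y-%m-%d %H:%M:%S")
    return dt.isoweekday() % 7 + 1  # map ISO Mon=1..Sun=7 onto Sun=1..Sat=7

print(cast_float_gt_30("42.7"))            # True
print(day_of_week("2020-04-01 08:30:00"))  # April 1, 2020 was a Wednesday -> 4
```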
Rich UDF Support
Feathr supports highly customizable UDFs with native PySpark and Spark SQL integration to lower the learning curve for data scientists:
from pyspark.sql import DataFrame
from pyspark.sql.functions import dayofweek

def add_new_dropoff_and_fare_amount_column(df: DataFrame) -> DataFrame:
    df = df.withColumn("f_day_of_week", dayofweek("lpep_dropoff_datetime"))
    df = df.withColumn("fare_amount_cents", df.fare_amount.cast('double') * 100)
    return df

batch_source = HdfsSource(name="nycTaxiBatchSource",
                          path="abfss://feathrazuretest3fs@feathrazuretest3storage.dfs.core.windows.net/demo_data/green_tripdata_2020-04.csv",
                          preprocessing=add_new_dropoff_and_fare_amount_column,
                          event_timestamp_column="new_lpep_dropoff_datetime",
                          timestamp_format="yyyy-MM-dd HH:mm:ss")
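The UDF above casts the raw `fare_amount` column (a string in the CSV) to cents. The same row-level transformation can be sketched without Spark on a list of plain dicts, with hypothetical sample rows:

```python
def add_fare_amount_cents(rows):
    """Plain-Python sketch of the fare_amount_cents step in the PySpark UDF."""
    return [
        {**row, "fare_amount_cents": float(row["fare_amount"]) * 100}
        for row in rows
    ]

rows = [{"fare_amount": "12.5"}, {"fare_amount": "7.0"}]
print(add_fare_amount_cents(rows)[0]["fare_amount_cents"])  # 1250.0
```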
Accessing Features
# Define the key for your feature
location_id = TypedKey(key_column="DOLocationID",
                       key_column_type=ValueType.INT32,
                       description="location id in NYC",
                       full_name="nyc_taxi.location_id")

# Requested features to be joined
feature_query = FeatureQuery(feature_list=["f_location_avg_fare"], key=[location_id])

# Observation dataset settings
settings = ObservationSettings(
    observation_path="abfss://green_tripdata_2020-04.csv",  # Path to your observation data
    event_timestamp_column="lpep_dropoff_datetime",         # Event timestamp field for your data, optional
    timestamp_format="yyyy-MM-dd HH:mm:ss")                 # Event timestamp format, optional

# Prepare training data by joining features to the input (observation) data.
# feature-join.conf and features.conf are detected and used automatically.
feathr_client.get_offline_features(observation_settings=settings,
                                   output_path="abfss://output.avro",
                                   feature_query=feature_query)
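The `timestamp_format` strings follow Spark's Java-style datetime patterns. When sanity-checking observation data locally, the same pattern can be translated to Python's `strptime` syntax — `"yyyy-MM-dd HH:mm:ss"` corresponds to `"%Y-%m-%d %H:%M:%S"`:

```python
from datetime import datetime

# Spark pattern "yyyy-MM-dd HH:mm:ss" expressed in strptime syntax.
STRPTIME_FORMAT = "%Y-%m-%d %H:%M:%S"

def parse_event_timestamp(value):
    """Parse an event timestamp string the way the Spark job interprets it."""
    return datetime.strptime(value, STRPTIME_FORMAT)

print(parse_event_timestamp("2020-04-15 23:59:59"))
```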
Deploy Features to Online (Redis) Store
client = FeathrClient()
redisSink = RedisSink(table_name="nycTaxiDemoFeature")

# Materialize two features into a Redis table.
settings = MaterializationSettings("nycTaxiMaterializationJob",
                                   sinks=[redisSink],
                                   feature_names=["f_location_avg_fare", "f_location_max_fare"])
client.materialize_features(settings)
And get features from the online store:
# Get features for a locationId (key)
client.get_online_features(feature_table="agg_features",
                           key="265",
                           feature_names=['f_location_avg_fare', 'f_location_max_fare'])

# Batch get for multiple locationIds (keys)
client.multi_get_online_features(feature_table="agg_features",
                                 key=["239", "265"],
                                 feature_names=['f_location_avg_fare', 'f_location_max_fare'])
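Conceptually, the online store maps (table, key) to a row of named feature values, and an online lookup returns the requested features in order. A toy in-memory sketch of that contract, with a dict standing in for Redis (Feathr's actual key/value encoding is internal to the library, and all names and values here are hypothetical):

```python
class InMemoryFeatureStore:
    """A dict standing in for the Redis online store (conceptual sketch only)."""

    def __init__(self):
        self._tables = {}

    def materialize(self, table, key, features):
        # What a materialization job conceptually produces: one row per key.
        self._tables.setdefault(table, {})[key] = features

    def get_online_features(self, table, key, feature_names):
        # Online lookup: return requested features in order; None if missing.
        row = self._tables.get(table, {}).get(key, {})
        return [row.get(name) for name in feature_names]

store = InMemoryFeatureStore()
store.materialize("agg_features", "265",
                  {"f_location_avg_fare": 21.3, "f_location_max_fare": 88.0})
print(store.get_online_features("agg_features", "265",
                                ["f_location_avg_fare", "f_location_max_fare"]))
# [21.3, 88.0]
```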
Defining Window Aggregation Features
agg_features = [Feature(name="f_location_avg_fare",
                        key=location_id,                    # Query/join key of the feature (group)
                        feature_type=FLOAT,
                        transform=WindowAggTransformation(  # Window aggregation transformation
                            agg_expr="cast_float(fare_amount)",
                            agg_func="AVG",                 # Apply average aggregation over the window
                            window="90d")),                 # Over a 90-day window
                ]

agg_anchor = FeatureAnchor(name="aggregationFeatures",
                           source=batch_source,
                           features=agg_features)
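The shape of the `AVG` over `90d` aggregation above can be sketched in plain Python: average the casted values whose event timestamps fall in the trailing 90-day window ending at the point of observation. A stdlib-only sketch with hypothetical events (not Feathr's implementation):

```python
from datetime import datetime, timedelta

def window_avg(events, as_of, window_days=90):
    """Average of values over events in the trailing window (as_of - window, as_of]."""
    start = as_of - timedelta(days=window_days)
    values = [float(v) for ts, v in events if start < ts <= as_of]
    return sum(values) / len(values) if values else None

events = [
    (datetime(2020, 1, 1), "10.0"),   # outside the 90-day window, ignored
    (datetime(2020, 4, 1), "20.0"),
    (datetime(2020, 4, 10), "30.0"),
]
print(window_avg(events, as_of=datetime(2020, 4, 15)))  # 25.0
```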
Defining Named Data Sources
batch_source = HdfsSource(
    name="nycTaxiBatchSource",                       # Source name to enrich your metadata
    path="abfss://green_tripdata_2020-04.csv",       # Path to your data
    event_timestamp_column="lpep_dropoff_datetime",  # Event timestamp for point-in-time correctness
    timestamp_format="yyyy-MM-dd HH:mm:ss")          # Supports various formats including epoch
Beyond Features on Raw Data Sources - Derived Features
# Compute a new feature (a.k.a. a derived feature) on top of existing features
derived_feature = DerivedFeature(name="f_trip_time_distance",
                                 feature_type=FLOAT,
                                 key=trip_key,
                                 input_features=[f_trip_distance, f_trip_time_duration],
                                 transform="f_trip_distance * f_trip_time_duration")

# Another example: compute embedding similarity
user_embedding = Feature(name="user_embedding", feature_type=DENSE_VECTOR, key=user_key)
item_embedding = Feature(name="item_embedding", feature_type=DENSE_VECTOR, key=item_key)
user_item_similarity = DerivedFeature(name="user_item_similarity",
                                      feature_type=FLOAT,
                                      key=[user_key, item_key],
                                      input_features=[user_embedding, item_embedding],
                                      transform="cosine_similarity(user_embedding, item_embedding)")
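For reference, cosine similarity — the transform used by the `user_item_similarity` derived feature — is the dot product of two vectors divided by the product of their norms. A stdlib-only sketch with hypothetical embeddings:

```python
import math

def cosine_similarity(u, v):
    """cos(theta) between vectors u and v: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

user_embedding = [1.0, 0.0, 1.0]   # hypothetical dense vectors
item_embedding = [1.0, 1.0, 0.0]
print(round(cosine_similarity(user_embedding, item_embedding), 3))  # 0.5
```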
Running Feathr Examples
Follow the quick start Jupyter Notebook to try it out. There is also a companion quick start guide containing a bit more explanation on the notebook.
Cloud Integrations
Feathr component | Cloud Integrations
---|---
Offline store – Object Store | Azure Blob Storage, Azure ADLS Gen2, AWS S3
Offline store – SQL | Azure SQL DB, Azure Synapse Dedicated SQL Pools, Azure SQL in VM, Snowflake
Online store | Azure Cache for Redis
Feature Registry | Azure Purview
Compute Engine | Azure Synapse Spark Pools, Databricks
Machine Learning Platform | Azure Machine Learning, Jupyter Notebook
File Format | Parquet, ORC, Avro, Delta Lake
Roadmap
The Public Preview release may introduce API changes.
- Private Preview release
- Public Preview release
- Future release
  - Support streaming and online transformation
  - Support feature versioning
  - Support more data sources
Community Guidelines
Build for the community and build by the community. Check out Community Guidelines.
Slack Channel
Join our Slack channel for questions and discussions (or click the invitation link).