
Hopsworks Python SDK to interact with Hopsworks Platform, Feature Store, Model Registry and Model Serving


Hopsworks Client


hopsworks is the Python API for interacting with a Hopsworks cluster. Don't have a Hopsworks cluster yet? Register an account on Hopsworks SaaS and get started. Once connected to your project, you can:

  • Insert dataframes into the online or offline store, create training datasets, or serve real-time feature vectors in the Feature Store via the Feature Store API. Already have data somewhere you want to import? Check out our Storage Connectors documentation.
  • Register ML models in the Model Registry and deploy them via Model Serving using the Machine Learning API.
  • Manage environments, executions, Kafka topics and more once you deploy your own Hopsworks cluster, either on-premises or in the cloud. Hopsworks is open source and has its own Community Edition.

Our tutorials cover a wide range of use cases and examples of what you can build using Hopsworks.

Getting Started On Hopsworks

Once you have created a project on Hopsworks SaaS and generated a new API key, use your favourite virtual environment and package manager to install the library:

pip install "hopsworks[python]"

Fire up a notebook and connect to your project; you will be prompted to enter your newly created API key:

import hopsworks

project = hopsworks.login()
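In scripts or CI jobs, the interactive prompt can be avoided by passing the key directly, since `hopsworks.login()` also accepts an `api_key_value` argument. A minimal sketch, assuming the key is stored in a `HOPSWORKS_API_KEY` environment variable (an assumed name, not mandated by the SDK):

```python
import os

# Assumed environment variable name for this sketch.
API_KEY_ENV = "HOPSWORKS_API_KEY"

def login_kwargs():
    """Return keyword arguments for hopsworks.login(): pass the API key
    from the environment when present, otherwise fall back to the prompt."""
    key = os.environ.get(API_KEY_ENV)
    return {"api_key_value": key} if key else {}

# Usage (requires a reachable Hopsworks cluster):
# import hopsworks
# project = hopsworks.login(**login_kwargs())
```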

Feature Store API

Access the Feature Store of your project to use as a central repository for your feature data. Use your favourite data engineering library (pandas, polars, Spark, etc.) to insert data into the Feature Store, create training datasets or serve real-time feature vectors. Want to predict the likelihood of e-scooter accidents in real-time? Here's how you can do it:

fs = project.get_feature_store()

# Write to Feature Groups
bike_ride_fg = fs.get_or_create_feature_group(
  name="bike_rides",
  version=1,
  primary_key=["ride_id"],
  event_time="activation_time",
  online_enabled=True,
)

bike_ride_fg.insert(bike_rides_df)

# Join Feature Groups and read through a Feature View
profile_fg = fs.get_feature_group("user_profile", version=1)

bike_ride_fv = fs.get_or_create_feature_view(
  name="bike_rides_view",
  version=1,
  query=bike_ride_fg.select_except(["ride_id"]).join(profile_fg.select(["age", "has_license"]), on="user_id")
)

bike_rides_jan_2021_df = bike_ride_fv.get_batch_data(
  start_time="2021-01-01",
  end_time="2021-01-31"
)

# Create a training dataset
version, job = bike_ride_fv.create_train_test_split(
    test_size=0.2,
    description='Description of a dataset',
    # you can have different data formats such as csv, tsv, tfrecord, parquet and others
    data_format='csv'
)
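`test_size=0.2` above requests a random 80/20 row split. The same idea in plain Python, as an illustrative sketch only (Hopsworks performs the actual split at scale in the backend):

```python
import random

def train_test_split(rows, test_size=0.2, seed=42):
    """Shuffle rows and partition them into (train, test) by test_size."""
    rng = random.Random(seed)
    shuffled = list(rows)
    rng.shuffle(shuffled)
    cut = int(round(len(shuffled) * (1 - test_size)))
    return shuffled[:cut], shuffled[cut:]
```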

# Predict the probability of accident in real-time using new data + context data
bike_ride_fv.init_serving()

while True:
    new_ride_vector = poll_ride_queue()
    feature_vector = bike_ride_fv.get_feature_vector(
      {"user_id": new_ride_vector["user_id"]},
      passed_features=new_ride_vector
    )
    accident_probability = model.predict(feature_vector)
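Conceptually, the served vector combines precomputed context features looked up from the online store with request-time values supplied via `passed_features`, where the request-time values take precedence for overlapping columns. A plain-Python sketch of that merge (illustrative only; the column names are hypothetical):

```python
def assemble_feature_vector(stored, passed):
    """Merge online-store features with request-time features;
    request-time values win on key collisions."""
    return {**stored, **passed}

stored = {"age": 31, "has_license": True}        # looked up by user_id
passed = {"speed": 24.5, "battery_level": 0.62}  # from the incoming ride event
vector = assemble_feature_vector(stored, passed)
```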

The API enables interaction with the Hopsworks Feature Store. It makes creating new features, feature groups and training datasets easy.

The API is environment independent and can be used in two modes:

  • Spark mode: For data engineering jobs that create and write features into the feature store or generate training datasets. It requires a Spark environment such as the one provided in the Hopsworks platform or Databricks. In Spark mode, HSFS provides bindings both for Python and JVM languages.

  • Python mode: For data science jobs to explore the features available in the feature store, generate training datasets and feed them into a training pipeline. Python mode requires only a Python interpreter and can be used in Hopsworks from Python jobs and Jupyter kernels, as well as in external environments such as Amazon SageMaker or Kubeflow.

A Scala API is also available; here is a short sample:

import com.logicalclocks.hsfs._
val connection = HopsworksConnection.builder().build()
val fs = connection.getFeatureStore();
val attendances_features_fg = fs.getFeatureGroup("games_features", 1);
attendances_features_fg.show(1)

Machine Learning API

You can also use the Machine Learning API to interact with the Hopsworks Model Registry and Model Serving. The API makes it easy to export, manage and deploy models. For example, to register a model and deploy it for serving:

mr = project.get_model_registry()
# or
ms = project.get_model_serving()

# Create a new model:
model = mr.tensorflow.create_model(name="mnist",
                                   version=1,
                                   metrics={"accuracy": 0.94},
                                   description="mnist model description")
model.save("/tmp/model_directory") # or /tmp/model_file

# Download a model:
model = mr.get_model("mnist", version=1)
model_path = model.download()

# Delete the model:
model.delete()

# Get the best-performing model
best_model = mr.get_best_model('mnist', 'accuracy', 'max')

# Deploy the model:
deployment = model.deploy()
deployment.start()

# Make predictions with a deployed model
data = { "instances": [ model.input_example ] }
predictions = deployment.predict(data)
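The `{"instances": [...]}` envelope above follows the TensorFlow Serving REST convention, where `instances` is a list of input rows. A small helper for building such payloads (a sketch for illustration, not part of the SDK):

```python
def build_payload(rows):
    """Wrap one or more input rows in a TF-Serving style request body.
    A non-list argument is treated as a single row."""
    if not isinstance(rows, list):
        rows = [rows]
    return {"instances": rows}
```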

Usage

Usage data is collected to improve the quality of the library. Collection is on by default when the backend is Hopsworks SaaS. To turn it off, use one of the following:

# use environment variable
import os
os.environ["ENABLE_HOPSWORKS_USAGE"] = "false"

# use `disable_usage_logging`
import hopsworks
hopsworks.disable_usage_logging()

The corresponding source code is in python/hopsworks_common/usage.py.

Tutorials

Need more inspiration or want to learn more about the Hopsworks platform? Check out our tutorials.

Documentation

Documentation is available at Hopsworks Documentation.

Issues

For general questions about the usage of Hopsworks and the Feature Store please open a topic on Hopsworks Community.

Please report any issues using the GitHub issue tracker and attach the client environment from the output below to your issue:

import hopsworks
hopsworks.login()
print(hopsworks.get_sdk_info())

Contributing

If you would like to contribute to this library, please see the Contribution Guidelines.
