
Wyvern

Wyvern is a real-time machine learning platform for marketplaces

Homepage | Docs | Join Us On Slack

What is Wyvern?

Wyvern is a real-time machine learning platform for marketplaces:

  • Search and Discovery: Wyvern specializes in bringing use cases like recommendations and rankings in-house.
  • Empower the Data Team: Wyvern is tailored for your data team to independently build and deploy production-grade machine learning pipelines for the e-commerce and marketplace industry, reducing the engineering involvement in the entire process.
  • Orchestration for ML Pipelines: Wyvern is agnostic to the solutions you pick for your feature store, model serving, or data warehouse. It automates retrieving data from the feature store, passing it to the model service, and logging all events. By abstracting that engineering work away, Wyvern lets data scientists own the full ML stack: they focus on defining the request/response schemas of the API, the model, the features the model depends on, the business logic after the model, and finally training the models with the feedback data generated by the ML pipeline.
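The orchestration described above can be pictured as a simple chain: fetch features, score with the model, then apply business logic. The sketch below illustrates that flow in plain Python; every name in it is hypothetical and it is not Wyvern's actual API:

```python
# Illustrative sketch of the flow Wyvern orchestrates.
# All function names here are hypothetical -- NOT Wyvern's API.

def get_features(candidate_ids):
    """Stand-in for a feature-store lookup (e.g. a feast query)."""
    return {cid: {"popularity": len(cid)} for cid in candidate_ids}

def score(features):
    """Stand-in for a model call: here, score = popularity."""
    return {cid: float(f["popularity"]) for cid, f in features.items()}

def apply_business_logic(scores):
    """Post-model adjustments, then sort descending by score."""
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

def rank(candidate_ids):
    """Chain the three stages, as the platform does for you."""
    return apply_business_logic(score(get_features(candidate_ids)))

print(rank(["p1", "p10", "p100"]))
```

In the real platform each stage is a pluggable component (feature store, model service, logging), which is why the framework can stay agnostic to the specific vendors you choose.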

More about why we built Wyvern can be found here.

Wyvern Architecture

[Wyvern architecture diagram]

Overall, Wyvern gives you a framework to quickly define your real-time ML pipeline. As you can see in the architecture, there are several important components:

  1. Retrieval: Wyvern can connect to and retrieve objects from your search index.
  2. Feature Module: Wyvern has built-in support for feast, an open source feature store, and can also connect to the feature store of your choice. It provides interfaces for easily defining batch and real-time features, with support for feature grouping, feature sharing, features for composite entities, and request-based features.
  3. Model Module: Wyvern provides a model interface that lets you define your model in place or call out to your model service, along with an easy way to declare the features your model depends on.
  4. Business Logic: Wyvern makes it easy to define business logic that runs after model inference. For example, you might promote a specific brand of T-shirt to the top of the ranking results for the "tshirt" query.
  5. Observability and Event Logging: All the events in your ML application, including real-time feature, model, business logic, and product impression events, as well as your own custom events, are automatically logged by Wyvern, and the data can be piped to your data warehouse. Refer to Logging Events for more information.
  6. Training Dataset: Wyvern provides a feature store serving solution (currently integrated only with feast) that serves historical batch features together with the real-time features logged in your data warehouse.
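As a concrete illustration of the Business Logic step, a rule that boosts one brand after model inference might look like the following plain-Python sketch (the data shapes here are illustrative, not Wyvern's actual interface):

```python
# Hypothetical sketch: boost a promoted brand after model inference.
# The candidate dictionaries are illustrative, not Wyvern's interface.

def boost_brand(ranked, brand, boost=100.0):
    """Add a large constant to candidates of the promoted brand,
    then re-sort descending by score."""
    adjusted = [
        {**c, "score": c["score"] + (boost if c["brand"] == brand else 0.0)}
        for c in ranked
    ]
    return sorted(adjusted, key=lambda c: c["score"], reverse=True)

ranked = [
    {"product_id": "p1", "brand": "acme", "score": 0.9},
    {"product_id": "p2", "brand": "unbranded", "score": 0.8},
    {"product_id": "p3", "brand": "acme", "score": 0.7},
]
print([c["product_id"] for c in boost_brand(ranked, "unbranded")])
# p2 jumps to the top: ['p2', 'p1', 'p3']
```

Keeping this logic in a separate stage, rather than baking it into the model, means merchandising rules can change without retraining.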

As Wyvern is open source, we will bring in more integrations with feature stores, model serving solutions, search indexes for retrieval, observability tools, and data warehouses.

Quickstart

Install Wyvern

pip install wyvern-ai

Create Wyvern Project

Once Wyvern is installed, run this command to initialize your Wyvern project:

wyvern init name_of_your_project

Now that wyvern init has set up your initial repository, you should see the following file structure:

├── pipelines
│   ├── __init__.py
│   ├── main.py
│   ├── product_ranking
│   │   ├── __init__.py
│   │   ├── models.py
│   │   ├── ranking_pipline.py
│   │   ├── realtime_features.py
│   │   ├── schemas.py
├── feature-store-python
│   ├── features
│   │   ├── feature_store.yaml
│   │   ├── features.py
│   │   ├── main.py
├── .env
└── .gitignore

The generated template code is an ML pipeline example for ranking products.

Run Wyvern Application

Prerequisite

  • Redis service

Wyvern uses Redis as its index and online feature store. By default, Wyvern connects to localhost:6379.

You can run this command to install and spin up Redis locally:

wyvern redis
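Once Redis is up, you can verify that something is listening on the default port. The helper below is a generic TCP port check using only the Python standard library (it is not a Wyvern command):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

print("redis reachable:", is_port_open("localhost", 6379))
```

If this prints False, the app will fail to reach its index and online feature store, so it is worth checking before `wyvern run`.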

Run Wyvern

Now cd into the repository that was generated and run:

wyvern run

Your service now runs at http://0.0.0.0:5001, and the default ranking API schema can be found at http://0.0.0.0:5001/redoc.

Make A Request

Suppose you have 24 products you would like to rank. Here's a curl request:

curl --location 'http://0.0.0.0:5001/api/v1/ranking' \
--header 'Content-Type: application/json' \
--data '{
    "request_id": "test_request_id",
    "query": {"query": "candle"},
    "candidates": [
        {"product_id": "p1"},
        {"product_id": "p2"},
        {"product_id": "p3"},
        {"product_id": "p4"},
        {"product_id": "p5"},
        {"product_id": "p6"},
        {"product_id": "p7"},
        {"product_id": "p8"},
        {"product_id": "p9"},
        {"product_id": "p10"},
        {"product_id": "p11"},
        {"product_id": "p12"},
        {"product_id": "p13"},
        {"product_id": "p14"},
        {"product_id": "p15"},
        {"product_id": "p16"},
        {"product_id": "p17"},
        {"product_id": "p18"},
        {"product_id": "p19"},
        {"product_id": "p20"},
        {"product_id": "p21"},
        {"product_id": "p22"},
        {"product_id": "p23"},
        {"product_id": "p24"}
    ],
    "user_page_size": 8,
    "user_page": 0,
    "candidate_page_size": 24,
    "candidate_page": 0
}'
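The same request can be issued from Python. The snippet below builds the 24-candidate payload programmatically and uses only the standard library; the `rank` helper assumes the service started by `wyvern run` is listening on port 5001, so it is defined but not called here:

```python
import json
from urllib import request

# Build the same payload as the curl example, generating p1..p24.
payload = {
    "request_id": "test_request_id",
    "query": {"query": "candle"},
    "candidates": [{"product_id": f"p{i}"} for i in range(1, 25)],
    "user_page_size": 8,
    "user_page": 0,
    "candidate_page_size": 24,
    "candidate_page": 0,
}

def rank(payload: dict) -> dict:
    """POST the payload to the local ranking endpoint.
    Only works while `wyvern run` is serving on port 5001."""
    req = request.Request(
        "http://0.0.0.0:5001/api/v1/ranking",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

print(len(payload["candidates"]), "candidates")
# 24 candidates
```

With the service running, `rank(payload)` returns the same JSON body the curl command does.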

The request sends 24 products to Wyvern. Wyvern ranks them and returns the 8 products for the first page ("user_page": 0).

You should see a response with the products ordered by descending ranking score.

Response example:
{
  "ranked_candidates": [
    {
      "candidate_id": "p9",
      "ranked_score": 43.13991415724884
    },
    {
      "candidate_id": "p18",
      "ranked_score": 42.314880208313376
    },
    {
      "candidate_id": "p17",
      "ranked_score": 41.62010469362527
    },
    {
      "candidate_id": "p3",
      "ranked_score": 40.48391586690772
    },
    {
      "candidate_id": "p19",
      "ranked_score": 39.82504624652922
    },
    {
      "candidate_id": "p24",
      "ranked_score": 39.10042317690844
    },
    {
      "candidate_id": "p11",
      "ranked_score": 38.670359237541945
    },
    {
      "candidate_id": "p21",
      "ranked_score": 37.27313489135458
    }
  ]
}
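You can sanity-check a response like the one above in a few lines of Python: confirm the scores are in descending order, then read off the top candidate. The dictionary below is the example response truncated to its top three entries:

```python
# Example response, truncated to the top three entries.
response = {
    "ranked_candidates": [
        {"candidate_id": "p9", "ranked_score": 43.14},
        {"candidate_id": "p18", "ranked_score": 42.31},
        {"candidate_id": "p17", "ranked_score": 41.62},
    ]
}

scores = [c["ranked_score"] for c in response["ranked_candidates"]]
assert scores == sorted(scores, reverse=True), "scores must be descending"
print("top candidate:", response["ranked_candidates"][0]["candidate_id"])
# top candidate: p9
```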

🎉 Congratulations on making your first Wyvern request!

To learn more about how this ML pipeline is built, check out Wyvern ML Pipeline.

To learn more about Wyvern in general, check out our documentation.

More Documentation
