
Wyvern

Wyvern is a real-time machine learning platform for marketplaces

Homepage | Docs | Join Us On Slack

What is Wyvern?

Wyvern is a real-time machine learning platform for marketplaces:

  • Search and Discovery: Wyvern specializes in bringing use cases like recommendations and rankings in-house.
  • Empower the Data Team: Wyvern is tailored for your data team to independently build and deploy production-grade machine learning pipelines for the e-commerce and marketplace industry, reducing engineering involvement throughout the process.
  • Orchestration for ML Pipelines: Wyvern is agnostic to the solutions you pick for your feature store, model serving solution, or data warehouse. It automates retrieving data from the feature store, passing data to the model service, and logging all the events. It abstracts this engineering work away from data scientists so that they can own the full ML stack and focus on defining the request/response schemas of the API, the model, the features the model depends on, the business logic applied after the model, and finally training the models with the feedback data generated by the ML pipeline.

More about why we built Wyvern can be found here.

Wyvern Architecture

(Wyvern architecture diagram)

Overall, Wyvern gives you a framework to quickly define your real-time ML pipeline. As you can see in the architecture, it has several important components (a conceptual sketch of how they fit together follows the list):

  1. Retrieval: Wyvern can connect to and retrieve objects from your search index.
  2. Feature Module: Wyvern has built-in support for Feast, an open source feature store, and also supports connecting to the feature store of your choice. Moreover, Wyvern provides interfaces for defining batch features and real-time features easily, with support for feature grouping, feature sharing, features for composite entities, and request-based features.
  3. Model Module: Wyvern provides a model interface that allows you to define your own model in place or call your model service. It also provides an interface to easily define the features that your model depends on.
  4. Business Logic: Wyvern makes it easy to define business logic that runs after model inference. For example, you might want to promote a specific brand of t-shirt and move it to the top of the ranking result for the "tshirt" query.
  5. Observability and Event Logging: All the events in your ML application, including real-time feature, model, business logic, and product impression events, as well as your own custom events, are automatically logged by Wyvern, and the data can be piped to your data warehouse. Refer to Logging Events for more information.
  6. Training Dataset: Wyvern provides a feature store serving solution (currently integrated only with Feast) to serve historical batch features and the real-time features logged in your data warehouse.
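
To make the flow between these components concrete, here is a minimal sketch of the stages in plain Python. This is a conceptual illustration only; every class and function name in it is a hypothetical placeholder and does not reflect Wyvern's actual API.

```python
# Conceptual sketch of the stages Wyvern orchestrates for one ranking request.
# Every name below is a hypothetical placeholder, NOT Wyvern's actual API.
from dataclasses import dataclass


@dataclass
class Candidate:
    product_id: str
    score: float = 0.0


def retrieve_candidates(query: str) -> list[Candidate]:
    """1. Retrieval: pull candidate products from a search index."""
    return [Candidate(product_id=f"p{i}") for i in range(1, 25)]


def fetch_features(candidates: list[Candidate], query: str) -> dict[str, dict[str, float]]:
    """2. Feature module: look up batch and real-time features per candidate."""
    return {c.product_id: {"query_length": float(len(query))} for c in candidates}


def score_candidates(candidates: list[Candidate], features: dict) -> list[Candidate]:
    """3. Model module: run the model in-process or call a model service."""
    for c in candidates:
        c.score = features[c.product_id]["query_length"]  # stand-in for a real model
    return candidates


def apply_business_logic(candidates: list[Candidate]) -> list[Candidate]:
    """4. Business logic: e.g. boost a promoted brand, then sort by score."""
    return sorted(candidates, key=lambda c: c.score, reverse=True)


def log_event(stage: str, payload: object) -> None:
    """5. Observability: Wyvern logs each stage automatically; made explicit here."""
    print(f"[event] {stage}: {payload}")


def rank(query: str) -> list[Candidate]:
    candidates = retrieve_candidates(query)
    features = fetch_features(candidates, query)
    ranked = apply_business_logic(score_candidates(candidates, features))
    log_event("ranking", [c.product_id for c in ranked])
    return ranked
```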

As Wyvern is open source, we will bring in more integrations with different feature stores, model serving solutions, search indexes for retrieval, and observability tools, as well as integrations with more data warehouses.

Quickstart

Install Wyvern

pip install wyvern-ai

Create Wyvern Project

Once Wyvern is installed, run this command to initialize your Wyvern project:

wyvern init name_of_your_project

Now that wyvern init has set up your initial repository, you should see the following file structure:

├── pipelines
│   ├── __init__.py
│   ├── main.py
│   ├── product_ranking
│   │   ├── __init__.py
│   │   ├── models.py
│   │   ├── ranking_pipline.py
│   │   ├── realtime_features.py
│   │   ├── schemas.py
├── feature-store-python
│   ├── features
│   │   ├── feature_store.yaml
│   │   ├── features.py
│   │   ├── main.py
├── .env
└── .gitignore
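
For a rough idea of what the request and response schemas in schemas.py look like, here is a minimal Pydantic sketch modeled on the ranking request and response used later in this quickstart. The class and field layout is an illustrative assumption, not the exact contents of the generated template.

```python
# Illustrative sketch of ranking request/response schemas using Pydantic.
# Class names are assumptions modeled on the quickstart request below,
# not the exact contents of the generated schemas.py.
from pydantic import BaseModel


class Query(BaseModel):
    query: str


class ProductCandidate(BaseModel):
    product_id: str


class RankingRequest(BaseModel):
    request_id: str
    query: Query
    candidates: list[ProductCandidate]
    user_page_size: int
    user_page: int
    candidate_page_size: int
    candidate_page: int


class RankedCandidate(BaseModel):
    candidate_id: str
    ranked_score: float


class RankingResponse(BaseModel):
    ranked_candidates: list[RankedCandidate]
```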

The generated template code is an ML pipeline example for ranking products.

Run Wyvern Application

Prerequisite

  • Redis service

Wyvern uses Redis as its index and online feature store. By default, Wyvern connects to localhost:6379.

You can run this command to install and spin up Redis locally:

wyvern redis
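
Before starting the service, you can optionally confirm that Redis is reachable on the default localhost:6379, for example with the redis Python client. This check is just a suggestion, not a required Wyvern step.

```python
# Optional sanity check that Redis is reachable on the default localhost:6379.
# Requires the "redis" package (pip install redis); not a required Wyvern step.
import redis

client = redis.Redis(host="localhost", port=6379)
print("Redis reachable:", client.ping())  # True if the server responds to PING
```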

Run Wyvern

Now cd into the repository that was generated and run:

wyvern run

Your service now runs on http://0.0.0.0:5001 and the default ranking API schema can be found at http://0.0.0.0:5001/redoc.

Make A Request

Assume you have 24 products that you would like to rank. Here's a curl request:

curl --location 'http://0.0.0.0:5001/api/v1/ranking' \
--header 'Content-Type: application/json' \
--data '{
    "request_id": "test_request_id",
    "query": {"query": "candle"},
    "candidates": [
        {"product_id": "p1"},
        {"product_id": "p2"},
        {"product_id": "p3"},
        {"product_id": "p4"},
        {"product_id": "p5"},
        {"product_id": "p6"},
        {"product_id": "p7"},
        {"product_id": "p8"},
        {"product_id": "p9"},
        {"product_id": "p10"},
        {"product_id": "p11"},
        {"product_id": "p12"},
        {"product_id": "p13"},
        {"product_id": "p14"},
        {"product_id": "p15"},
        {"product_id": "p16"},
        {"product_id": "p17"},
        {"product_id": "p18"},
        {"product_id": "p19"},
        {"product_id": "p20"},
        {"product_id": "p21"},
        {"product_id": "p22"},
        {"product_id": "p23"},
        {"product_id": "p24"}
    ],
    "user_page_size": 8,
    "user_page": 0,
    "candidate_page_size": 24,
    "candidate_page": 0
}'

The request sends 24 products to Wyvern. Wyvern ranks these products and returns the 8 products for the first page ("user_page": 0).

You should see a response with the products ordered by descending ranking score.

Response example:
{
  "ranked_candidates": [
    {
      "candidate_id": "p9",
      "ranked_score": 43.13991415724884
    },
    {
      "candidate_id": "p18",
      "ranked_score": 42.314880208313376
    },
    {
      "candidate_id": "p17",
      "ranked_score": 41.62010469362527
    },
    {
      "candidate_id": "p3",
      "ranked_score": 40.48391586690772
    },
    {
      "candidate_id": "p19",
      "ranked_score": 39.82504624652922
    },
    {
      "candidate_id": "p24",
      "ranked_score": 39.10042317690844
    },
    {
      "candidate_id": "p11",
      "ranked_score": 38.670359237541945
    },
    {
      "candidate_id": "p21",
      "ranked_score": 37.27313489135458
    }
  ]
}
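
For reference, the same request can be sent from Python with the requests library, assuming the service is running locally and requests is installed:

```python
# The same ranking request as the curl example above, sent with the requests library.
import requests

payload = {
    "request_id": "test_request_id",
    "query": {"query": "candle"},
    "candidates": [{"product_id": f"p{i}"} for i in range(1, 25)],  # p1 .. p24
    "user_page_size": 8,
    "user_page": 0,
    "candidate_page_size": 24,
    "candidate_page": 0,
}

response = requests.post("http://0.0.0.0:5001/api/v1/ranking", json=payload)
response.raise_for_status()
for candidate in response.json()["ranked_candidates"]:
    print(candidate["candidate_id"], candidate["ranked_score"])
```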

🎉 Congratulations on making your first Wyvern request!

To learn more about how this ML pipeline is built, check out Wyvern ML Pipeline.

To learn more about Wyvern in general, check out our documentation.

More Documentation

