
Project description


Relevance AI - The ML Platform for Unstructured Data Analysis


🌎 80% of the world's data is unstructured, in the form of text, images, audio, video, and more.

🔥 Use Relevance to unlock the value of your unstructured data:

  • ⚡ Quickly analyze unstructured data with pre-trained machine learning models in a few lines of code.
  • ✨ Visualize your unstructured data: text highlights from named entity recognition, word clouds from keywords, bounding boxes for images.
  • 📊 Create charts for both structured and unstructured data.
  • 🔎 Drilldown with filters and similarity search to explore and find insights.
  • 🚀 Share data apps with your team.

Sign up for a free account ->

Relevance AI also acts as a platform for:

  • 🔑 Vectors: store and query vectors with flexible vector similarity search that can combine multiple vectors, aggregations and filters.
  • 🔮 ML Dataset Evaluation: debug dataset labels and model outputs, and surface edge cases.

🧠 Documentation

  • Python API documentation
  • Python reference documentation
  • Cloud dashboard documentation

🛠️ Installation

Using pip:

pip install -U relevanceai

Using conda:

conda install -c relevance relevanceai

⏩ Quickstart


Log in to Relevance AI:

from relevanceai import Client

client = Client()

Prepare your documents for insertion in the following format:

  • Each document should be a dictionary
  • Include a field _id as a primary key, otherwise it's automatically generated
  • Suffix vector fields with _vector_
docs = [
    {"_id": "1", "example_vector_": [0.1, 0.1, 0.1], "data": "Documentation"},
    {"_id": "2", "example_vector_": [0.2, 0.2, 0.2], "data": "Best document!"},
    {"_id": "3", "example_vector_": [0.3, 0.3, 0.3], "data": "document example"},
    {"_id": "4", "example_vector_": [0.4, 0.4, 0.4], "data": "this is another doc"},
    {"_id": "5", "example_vector_": [0.5, 0.5, 0.5], "data": "this is a doc"},
]

Insert data into a dataset

Create a dataset object with the name of the dataset you'd like to use. If it doesn't exist, it'll be created for you.

ds = client.Dataset("quickstart")
ds.insert_documents(docs)

Quick tip! Our Dataset object is compatible with common dataframe methods like .head(), .shape() and .info().
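A minimal sketch of those helpers, assuming they behave like their pandas counterparts (exact signatures and return values may differ in this SDK):

ds.head()   # preview the first few documents in the dataset
ds.shape()  # number of documents and number of fields
ds.info()   # per-field summary of the dataset schema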

Perform vector search

query = [
    {"vector": [0.2, 0.2, 0.2], "field": "example_vector_"}
]
results = ds.search(
    vector_search_query=query,
    page_size=3,
)

Learn more about how to flexibly configure your vector search ->
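As a hedged sketch, combining more than one vector in a query appears to be a matter of adding more entries to the list passed as vector_search_query; the second field name below is purely illustrative and not part of the quickstart data:

multi_query = [
    {"vector": [0.2, 0.2, 0.2], "field": "example_vector_"},
    # hypothetical second vector field, shown only to illustrate the list structure
    {"vector": [0.4, 0.4, 0.4], "field": "another_example_vector_"},
]
results = ds.search(
    vector_search_query=multi_query,
    page_size=3,
)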

Perform clustering

Generate clusters

clusterop = ds.cluster(vector_fields=["example_vector_"])
clusterop.list_closest()

Generate clusters with sklearn

from sklearn.cluster import AgglomerativeClustering

cluster_model = AgglomerativeClustering()
clusterop = ds.cluster(vector_fields=["example_vector_"], model=cluster_model, alias="agglomerative")
clusterop.list_closest()

Learn more about how to flexibly configure your clustering ->

🧰 Config

The config object contains the adjustable global settings for the SDK. For a description of all the settings, see here.

To view setting options, run the following:

client.config.options

The syntax for selecting an option is section.key. For example, to disable logging, run the following to modify logging.enable_logging:

client.config.set_option('logging.enable_logging', False)

To restore all options to their default, run the following:
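A minimal sketch, assuming the config object exposes a reset helper (the method name below is an assumption and may differ; check client.config for the exact call):

client.config.reset_to_default()  # assumed helper to restore every option to its default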

Changing the base URL

You can change the base URL as follows:

client.base_url = "https://.../latest"

🚧 Development

Getting Started

To get started with development, ensure you have pytest and mypy installed. These are used for testing and type checking.

python -m pip install pytest mypy

Don't forget to set your test credentials:

export TEST_PROJECT=xxx
export TEST_API_KEY=xxx

Then run the tests and type checks:

python -m pytest
mypy relevanceai

Set up pre-commit

pip install pre-commit
pre-commit install


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

RelevanceAI-3.2.17.tar.gz (300.3 kB)

Uploaded Source

Built Distribution

RelevanceAI-3.2.17-py3-none-any.whl (426.8 kB)

Uploaded Python 3

File details

Details for the file RelevanceAI-3.2.17.tar.gz.

File metadata

  • Download URL: RelevanceAI-3.2.17.tar.gz
  • Upload date:
  • Size: 300.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.10.7

File hashes

Hashes for RelevanceAI-3.2.17.tar.gz:

  • SHA256: 1d93cce551c95b57f31df3ae0585d299ac16f107da5c3c9ae37463433d102fa4
  • MD5: 7a9a76a535a13a5dd4752686e666d4cc
  • BLAKE2b-256: 6da5dd2197fbd100485634430a9b8c1c82406a115449811f189446bdfac3a47b

See more details on using hashes here.
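For reference, a minimal sketch of checking a downloaded file against the SHA256 digest above using Python's standard hashlib (it assumes the archive sits in the current directory):

import hashlib

# SHA256 digest copied from the table above
expected = "1d93cce551c95b57f31df3ae0585d299ac16f107da5c3c9ae37463433d102fa4"

with open("RelevanceAI-3.2.17.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == expected, "hash mismatch: the download may be corrupted or tampered with"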

File details

Details for the file RelevanceAI-3.2.17-py3-none-any.whl.

File metadata

  • Download URL: RelevanceAI-3.2.17-py3-none-any.whl
  • Upload date:
  • Size: 426.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.10.7

File hashes

Hashes for RelevanceAI-3.2.17-py3-none-any.whl:

  • SHA256: bab96a5487843d9a5ae3015df2efec329580d8ded6eea80f78dc3ebb9b9f5be0
  • MD5: 392a013214cd78553a632e8215795ba2
  • BLAKE2b-256: 18917ebbf173f5ba75cc8de1ce65915695fd5163e38d3968dc47fb58552938c9

See more details on using hashes here.
