
The Superlinked vector computing library

Project description

 


Why use Superlinked

Improve your vector search relevance by encoding your metadata together with your data into your vector embeddings.

What is Superlinked

Superlinked is a framework AND a self-hostable REST API server that helps you make better vectors. It sits between your data, your vector database, and your backend services.

How does it work

Superlinked makes it easy to construct custom data & query embedding models from pre-trained encoders; see the feature and use-case notebooks below for examples.

If you like what we do, give us a star! ⭐

Visit Superlinked for more information about the company behind this product and our other initiatives.

Features

You can check a full list of our features or head to our reference section for more information.

Use-cases

Dive deeper with our notebooks into how each use-case benefits from the Superlinked framework.

You can check a full list of examples here.

Experiment in a notebook

An example of combining text and numerical encoders to get correct results from natural-language (LLM-parsed) queries.

Install the superlinked library

%pip install superlinked

Run the example:

The first run will take slightly longer, as it has to download the embedding model.

import json

from superlinked.framework.common.nlq.open_ai import OpenAIClientConfig
from superlinked.framework.common.parser.dataframe_parser import DataFrameParser
from superlinked.framework.common.schema.schema import schema
from superlinked.framework.common.schema.schema_object import Integer, String
from superlinked.framework.common.schema.id_schema_object import IdField
from superlinked.framework.common.space.config.embedding.number_embedding_config import Mode
from superlinked.framework.dsl.space.number_space import NumberSpace
from superlinked.framework.dsl.space.text_similarity_space import TextSimilaritySpace
from superlinked.framework.dsl.index.index import Index
from superlinked.framework.dsl.query.param import Param
from superlinked.framework.dsl.query.query import Query
from superlinked.framework.dsl.source.in_memory_source import InMemorySource
from superlinked.framework.dsl.executor.in_memory.in_memory_executor import (
    InMemoryExecutor,
)

@schema
class Review:
    id: IdField
    review_text: String
    rating: Integer


review = Review()

review_text_space = TextSimilaritySpace(
    text=review.review_text, model="Alibaba-NLP/gte-large-en-v1.5"
)
rating_maximizer_space = NumberSpace(
    number=review.rating, min_value=1, max_value=5, mode=Mode.MAXIMUM
)
index = Index([review_text_space, rating_maximizer_space], fields=[review.rating])

# fill this with your API key - this will drive param extraction
openai_config = OpenAIClientConfig(
    api_key="YOUR_OPENAI_API_KEY", model="gpt-4o"
)

# It is now possible to add a description to a `Param` to aid the parsing of information from natural-language queries.
text_similar_param = Param(
    "query_text",
    description="The text in the user's query that is used to search in the reviews' body. Extract info that does not apply to other spaces or params.",
)

# Define your query using dynamic parameters for query text and weights.
# we will have our LLM fill them based on our natural language query
query = (
    Query(
        index,
        weights={
            review_text_space: Param("review_text_weight"),
            rating_maximizer_space: Param("rating_maximizer_weight"),
        },
    )
    .find(review)
    .similar(
        review_text_space,
        text_similar_param,
    )
    .limit(Param("limit"))
    .with_natural_query(Param("natural_query"), openai_config)
)

# Run the app.
source: InMemorySource = InMemorySource(review)
executor = InMemoryExecutor(sources=[source], indices=[index])
app = executor.run()

# Define a small in-memory dataset.
data = [
    {"id": 1, "review_text": "Useless product", "rating": 1},
    {"id": 2, "review_text": "Great product I am so happy!", "rating": 5},
    {"id": 3, "review_text": "Mediocre stuff fits the purpose", "rating": 3},
]

# Ingest data to the framework.
source.put(data)

result = app.query(query, natural_query="Show me the best product", limit=1)

# examine the extracted parameters from your query
print(json.dumps(result.knn_params, indent=2))
# the result is the 5-star-rated product
result.to_pandas()
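
The natural-language step is optional: the same query can also be run by filling the dynamic parameters yourself. A minimal sketch, assuming the keyword arguments match the Param names defined above:

# Run the same query without the LLM step by supplying the Param values directly.
# The keyword names are assumed to match the Param names defined above.
result_manual = app.query(
    query,
    query_text="great product",
    review_text_weight=1.0,
    rating_maximizer_weight=1.0,
    limit=1,
)
result_manual.to_pandas()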

Run in production

Superlinked Server allows you to leverage the power of Superlinked in deployable projects. With a single script, you can deploy a Superlinked-powered app instance that creates REST endpoints and connects to external Vector Databases. This makes it an ideal solution for those seeking an easy-to-deploy environment for their Superlinked projects.
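
As a rough illustration, a deployable configuration might look like the sketch below. Treat it as a sketch only: the aggregated import and the class names (RestExecutor, RestSource, RestQuery, RestDescriptor, InMemoryVectorDatabase, SuperlinkedRegistry) follow the Superlinked Server conventions and may differ between versions, so verify them against the server documentation.

# Sketch of a Superlinked Server app config. The import path and class names are
# assumptions based on the Superlinked Server docs -- verify against your version.
from superlinked import framework as sl

rest_source = sl.RestSource(review)
rest_query = sl.RestQuery(sl.RestDescriptor("review_query"), query)

executor = sl.RestExecutor(
    sources=[rest_source],
    indices=[index],
    queries=[rest_query],
    vector_database=sl.InMemoryVectorDatabase(),  # swap in a supported VDB for production
)

sl.SuperlinkedRegistry.register(executor)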

If you are interested in learning more about running at scale, book a demo for early access to our managed cloud.

Supported VDBs

We have started partnering with vector database providers to allow you to use Superlinked with your VDB of choice. If you are unsure which VDB to choose, check out our Vector DB Comparison.

Missing your favorite VDB? Tell us which vector database we should support next!

Reference

  1. Describe your data using Python classes with the @schema decorator.
  2. Describe your vector embeddings from building blocks with Spaces.
  3. Combine your embeddings into a queryable Index.
  4. Define your search with dynamic parameters and weights as a Query.
  5. Load your data using a Source.
  6. Define your transformations with a Parser (e.g.: from pd.DataFrame); see the sketch after this list.
  7. Run your configuration with an Executor.

You can check all references here.
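
Step 6 is the one piece the notebook example above does not exercise. A minimal sketch of parsing a pandas DataFrame with the DataFrameParser imported earlier (the schema= and parser= keyword names are assumptions; check the parser reference for the exact signature):

import pandas as pd

from superlinked.framework.common.parser.dataframe_parser import DataFrameParser
from superlinked.framework.dsl.source.in_memory_source import InMemorySource

# Sketch: the DataFrame columns are expected to match the Review schema fields.
reviews_df = pd.DataFrame(
    [
        {"id": 4, "review_text": "Arrived broken", "rating": 2},
        {"id": 5, "review_text": "Exactly as described", "rating": 4},
    ]
)
df_parser = DataFrameParser(schema=review)
df_source: InMemorySource = InMemorySource(review, parser=df_parser)
# With this source registered on an executor, the rows would be ingested via
# df_source.put([reviews_df]) (the list wrapping is an assumption).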

Logging

Contextual information, such as the process ID and package scope, is automatically included in log messages. Personally Identifiable Information (PII) is filtered out by default but can be exposed by setting the SUPERLINKED_EXPOSE_PII environment variable to true.
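
For example, to opt in to PII in logs during local debugging (a minimal sketch; "true" is assumed to be an accepted value for the flag named above):

import os

# Enable PII in log messages before the framework emits any logs.
os.environ["SUPERLINKED_EXPOSE_PII"] = "true"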

Resources

  • Vector DB Comparison: Open-source collaborative comparison of vector databases by Superlinked.
  • Vector Hub: VectorHub is a free and open-source learning hub for people interested in adding vector retrieval to their ML stack.

Support

If you encounter any challenges during your experiments, feel free to create an issue, request a feature, or start a discussion. Make sure to group your feedback into separate issues and discussions by topic. Thank you for your feedback!

Project details



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

superlinked-12.9.0.tar.gz (167.5 kB)

Uploaded Source

Built Distribution

superlinked-12.9.0-py3-none-any.whl (411.4 kB)

Uploaded Python 3

File details

Details for the file superlinked-12.9.0.tar.gz.

File metadata

  • Download URL: superlinked-12.9.0.tar.gz
  • Upload date:
  • Size: 167.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for superlinked-12.9.0.tar.gz:

  • SHA256: 9f6cd2b4df2f609bf00f774c2049586c714bd2f4aae0c22ae957b0a84f124fea
  • MD5: 80829951fce4cee739924c0328dedc5a
  • BLAKE2b-256: c6a3b77180c147160f45a6701735339e03813d1e1c76a97655c818b475c03503

See more details on using hashes here.

File details

Details for the file superlinked-12.9.0-py3-none-any.whl.

File metadata

  • Download URL: superlinked-12.9.0-py3-none-any.whl
  • Upload date:
  • Size: 411.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for superlinked-12.9.0-py3-none-any.whl:

  • SHA256: 6124d0a69af6bde0b59b1fc5f0f9d94d19700a9bfa0d5d570003a02e20a830d4
  • MD5: 51e84a91fa387bb06b44bf8b82072aa6
  • BLAKE2b-256: 6756241615bbf98182041fe9d7f1f8027cf23d004d28e59193450feb5c63651f

See more details on using hashes here.
