DotProduct
Official DotProduct Python SDK
This is the official Python SDK for the DotProduct platform. Use this SDK to connect your backend applications, interactive notebooks, or command-line tools to DotProduct's platform.
What is DotProduct?
DotProduct lets you create and deploy custom RAG pipelines on state-of-the-art infrastructure, without the faff. Just create a context and add files to it. You can then search your context using vector search, hybrid search, etc. DotProduct takes care of all the infrastructure details behind the scenes (SSL, DNS, Kubernetes clusters, embedding models, GPUs, auto-scaling, load balancing, etc).
Quick Start
Install the package with pip or uv:
pip install dotproduct-rag
uv add dotproduct-rag
from dotproduct import DotProduct
# if api_key is omitted, DOTPRODUCT_API_KEY env variable is used
dp = DotProduct(api_key="<DOTPRODUCT_API_KEY>")
You can get an API key here.
Note: you may need to set OPENAI_API_KEY on the settings page if you are using OpenAI embeddings.
Create a Context
A Context is where you store your data.
To create a context:
context = dp.create_context(name="my_context")
To instantiate a local object referencing an existing context:
context = dp.Context(name="my_context")
Upload files
You can add individual files:
context.upload_files(['path_to_file_1.pdf', 'path_to_file_2.pdf'], max_chunk_size=400)
To upload strings directly, use the upload_texts method:
context.upload_texts(['Hi Bob!', 'Oh hey Mark!'])
You can also add a full directory of files:
context.upload_from_directory("./path_to_your_directory")
You can add metadata to files at upload time:
file_paths = ['path_to_file_1.pdf', 'path_to_file_2.pdf']
files_metadata = [{"tag": "file_1_tag"}, {"tag": "file_2_tag"}]
context.upload_files(file_paths, metadata=files_metadata)
You can use this metadata to filter searches against your context. For more details, see the DotProduct Structured Query Language section.
List files available in a context
files = context.list_files()
for file in files:
    print(file)
You can also generate a download link for a file using the file_id:
import requests
files = context.list_files()
file = files[0]
file_url = context.get_download_url(file_id=file.id)
response = requests.get(file_url)
path = f"./local_folder/{file.name}"
with open(path, "wb") as f:
    f.write(response.content)
Note: download URLs are valid for 5 minutes after they have been generated.
List the chunks available in a context
chunks = context.list_chunks(limit=50)
List all the contexts
all_contexts = dp.list_contexts()
This returns a list of all contexts available to you.
Deleting contexts
If you wish to delete any context, you can do so with:
dp.delete_context("my_context")
Search through your Context
The following snippet executes a query and searches across all chunks in the context whose metadata matches the filters:
context = dp.Context("my_context")
chunks = context.search(
    query="query_string_to_search",
    semantic_weight=0.7,
    full_text_weight=0.3,
    top_k=5,
    rrf_k=50,
    metadata_filters={"tag": {"$eq": "file_1_tag"}},
)
More details on the arguments for this method:
- query: The query string that will be embedded and used for the search.
- top_k: The maximum number of chunks that will be retrieved.
- semantic_weight: The weight given to the semantic (vector) similarity of the data in your context.
- full_text_weight: The weight given to keyword (full-text) matches in the context.
- rrf_k: A fairly technical parameter that determines how the semantic and full-text rankings are merged via reciprocal rank fusion.
- metadata_filters: A dictionary of criteria used to filter results based on metadata. See the DotProduct Structured Query Language section below for syntax details.
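To build intuition for rrf_k, here is a small, self-contained sketch of weighted reciprocal rank fusion. This is an illustration of the standard RRF formula, not DotProduct's actual server-side implementation; the function name and the exact weighting scheme are assumptions:

```python
def rrf_fuse(rankings, weights, k=50):
    """Fuse several ranked result lists with weighted Reciprocal Rank Fusion.

    rankings: ranked lists of document ids, best first
    weights:  one weight per ranking (e.g. semantic vs. full-text)
    k:        the rrf_k constant; a larger k flattens the score gap
              between top-ranked and lower-ranked documents
    """
    scores = {}
    for ranking, weight in zip(rankings, weights):
        for rank, doc_id in enumerate(ranking, start=1):
            # Each list contributes weight / (k + rank) for every document.
            scores[doc_id] = scores.get(doc_id, 0.0) + weight / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["a", "b", "c"]   # ranking from vector similarity
full_text = ["b", "c", "a"]  # ranking from keyword search
fused = rrf_fuse([semantic, full_text], weights=[0.7, 0.3], k=50)
```

With these weights the semantic ranking dominates, so document "a" stays on top even though the full-text ranking puts it last.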
Extract Structured Output from your Context
You can get structured output directly from your context by providing a JSON schema or a Pydantic (v2) BaseModel:
from pydantic import BaseModel, Field
class RockBandInfo(BaseModel):
    title: str = Field(description="a title of a 1970s rockband")
    lyrics: str = Field(description="lyrics to their absolute banger of a song")
context = dp.Context("my_context")
output_dict, chunks = context.extract_from_search(
    query="tell me about rockbands",
    schema=RockBandInfo,  # you can pass a pydantic (v2) model or a json schema dict
    extraction_prompt="Output only JSON matching the provided schema about the rockbands. If the text does not contain such information use **NO_DATA** as the default",
)
rock_band = RockBandInfo.model_validate(output_dict)
extract_from_search works just like search but returns the structured output
as a dictionary as well as the reference chunks.
Note that Pydantic is not a dependency of dotproduct; you can pass a JSON schema definition directly to the extract methods instead of a BaseModel.
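For illustration, a JSON-schema equivalent of the RockBandInfo model above could look like the following dict. The field names and descriptions are copied from the Pydantic example; the exact schema dialect DotProduct accepts is an assumption here:

```python
# A plain JSON schema dict mirroring the RockBandInfo Pydantic model.
rock_band_schema = {
    "type": "object",
    "properties": {
        "title": {
            "type": "string",
            "description": "a title of a 1970s rockband",
        },
        "lyrics": {
            "type": "string",
            "description": "lyrics to their absolute banger of a song",
        },
    },
    "required": ["title", "lyrics"],
}
```

A dict like this could then be passed as the schema argument in place of the BaseModel.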
You can also extract structured output directly from chunks without performing a search:
output_dict, chunks = context.extract_from_chunks(
    schema=RockBandInfo,  # you can pass a pydantic model or a json schema dict
    extraction_prompt="Output only JSON matching the provided schema about the rockbands. If the text does not contain such information use **NO_DATA** as the default",
    metadata_filters={"tag": {"$eq": "rockband"}},
)
rock_band = RockBandInfo.model_validate(output_dict)
In addition, both of these methods take a model parameter to be used for structured output extraction (be sure to set your Anthropic API key on the settings page):
output_dict, chunks = context.extract_from_chunks(
    schema=RockBandInfo,  # you can pass a pydantic model or a json schema dict
    extraction_prompt="Output only JSON matching the provided schema about the rockbands. If the text does not contain such information use **NO_DATA** as the default",
    metadata_filters={"tag": {"$eq": "rockband"}},
    model="claude-35",
)
rock_band = RockBandInfo.model_validate(output_dict)
DotProduct Structured Query Language
DotProduct allows you to use a custom "Structured Query Language" to filter the chunks in your context.
The syntax is quite similar to what you might find in NoSQL databases like MongoDB, even though it operates on a SQL database at its core.
The syntax is based around the application of operators. There are two levels of operators, which you can interpret as "aggregators" and "comparators".
Aggregators
The aggregator operators you can use are:
| Key | Value Description |
|---|---|
| $and | Returns True if and only if all of the conditions in this block return True. |
| $or | Returns True if any of the conditions in this block return True. |
Comparators
The comparator operators you can use are:
| Key | Value Description | Supplied Value Type | Returned Value Type |
|---|---|---|---|
| $eq | Returns True if the value returned from the DB is equal to the supplied value. | string, int, or float | string, int, or float |
| $neq | Returns True if the value returned from the DB is not equal to the supplied value. | string, int, or float | string, int, or float |
| $gt | Returns True if the value returned from the DB is greater than the supplied value. | int or float | int or float |
| $lt | Returns True if the value returned from the DB is less than the supplied value. | int or float | int or float |
| $in | Returns True if the value returned from the DB is contained by the supplied array. | array of string, int, or float | string, int, or float |
| $contains | Returns True if the array value returned from the DB contains the supplied value. | string, int, or float | array of string, int, or float |
Putting it all together
Using the above building blocks, you can compose quite advanced filters across your embeddings at runtime.
For example, in Python you could define metadata filters like this:
metadata_filters = {
    "$and": [
        {"$or": [
            {"department": {"$eq": "accounts"}},
            {"department": {"$in": ["finance", "compliance"]}},
        ]},
        {"tag": {"$eq": "test"}},
        {"my_score": {"$gt": 0.5}},
        {"my_other_score": {"$gt": 0.4}},
    ]
}
context = dp.Context("my_context")
chunks = context.search(
    query="query_string_to_search",
    metadata_filters=metadata_filters,
)
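To build intuition for how these operators combine, here is a small client-side sketch of an evaluator for the same filter syntax. This is an illustrative reimplementation, not DotProduct's server-side code; details like missing-key handling are assumptions:

```python
def matches(filters, metadata):
    """Return True if a chunk's metadata dict satisfies the filter dict."""
    for key, cond in filters.items():
        if key == "$and":
            # Every sub-filter in the list must match.
            if not all(matches(f, metadata) for f in cond):
                return False
        elif key == "$or":
            # At least one sub-filter in the list must match.
            if not any(matches(f, metadata) for f in cond):
                return False
        else:
            # A plain field: cond maps comparator operators to expected values.
            value = metadata.get(key)
            for op, expected in cond.items():
                if op == "$eq" and value != expected:
                    return False
                if op == "$neq" and value == expected:
                    return False
                if op == "$gt" and not (value is not None and value > expected):
                    return False
                if op == "$lt" and not (value is not None and value < expected):
                    return False
                if op == "$in" and value not in expected:
                    return False
                if op == "$contains" and expected not in (value or []):
                    return False
    return True

filters = {"$and": [
    {"$or": [
        {"department": {"$eq": "accounts"}},
        {"department": {"$in": ["finance", "compliance"]}},
    ]},
    {"my_score": {"$gt": 0.5}},
]}
ok = matches(filters, {"department": "finance", "my_score": 0.9})
```

Here the $or branch matches via $in, and the $gt condition passes, so the chunk would be returned.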
License
dotproduct-rag is distributed under the terms of the MIT license.