Turn unstructured data into vectors
Radient
Radient is a developer-friendly, lightweight library for vectorization, i.e. turning data into embeddings. Radient supports simple vectorization as well as complex vector-centric workflows.
$ pip install radient
Why Radient?
In applications that leverage RAG, vector databases are commonly used to retrieve content relevant to a query. Vector search has become so popular that "traditional" database vendors are rushing to support it. (Anybody see those funky SingleStore ads on US-101?)
Although still used predominantly for text today, vectors will see use across a variety of modalities in the coming months. This evolution is powered by two independent trends: 1) the shift from large language models to large multimodal models (such as GPT-4o, Reka, and Fuyu), and 2) rising adoption of vectors for "traditional" tasks such as recommendation and semantic search. In short, vectors are going mainstream, and we need a way to vectorize everything, not just text.
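For readers new to vector search: once data is embedded, "retrieving relevant content" boils down to comparing vectors, most often with cosine similarity. A minimal NumPy sketch with made-up embeddings (real ones would come from a model):

```python
import numpy as np

# Toy embeddings: three "documents" and one "query".
# Real embeddings would come from a model; these are made up for illustration.
docs = np.array([
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.0],
    [0.1, 0.2, 0.97],
])
docs = docs / np.linalg.norm(docs, axis=1, keepdims=True)

query = np.array([0.85, 0.15, 0.05])
query = query / np.linalg.norm(query)

# On unit-normalized vectors, cosine similarity reduces to a dot product.
scores = docs @ query
best = int(np.argmax(scores))  # index of the most similar document
```

A vector database performs essentially this computation, but over millions of vectors with approximate-nearest-neighbor indexes instead of a brute-force dot product.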
If you find this project helpful or interesting, please consider giving it a star. :star:
Getting started
Basic vectorization can be performed as follows:
from radient import text_vectorizer
vz = text_vectorizer()
vz.vectorize("Hello, world!") # Vector([-3.21440510e-02, -5.10351397e-02, 3.69579718e-02, ...])
The above snippet vectorizes the string "Hello, world!" using a default model, namely bge-small-en-v1.5 from sentence-transformers. If your Python environment does not contain the sentence-transformers library, Radient will prompt you for it:
vz = text_vectorizer() # Vectorizer requires sentence-transformers. Install? [Y/n]
You can type "Y" to have Radient install it for you automatically.
Each vectorizer can take a method parameter along with optional keyword arguments, which get passed directly to the underlying vectorization library. For example, we can pick Mixedbread AI's mxbai-embed-large-v1 model using the sentence-transformers library via:
vz_mbai = text_vectorizer(method="sentence-transformers", model_name_or_path="mixedbread-ai/mxbai-embed-large-v1")
vz_mbai.vectorize("Hello, world!") # Vector([ 0.01729078, 0.04468533, 0.00055427, ...])
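Under the hood, this is a standard keyword-argument-forwarding pattern: the factory consumes method and passes everything else through to the backend. A minimal, hypothetical sketch of the pattern (illustrative names only, not Radient's actual internals):

```python
# Hypothetical sketch of method dispatch with kwargs forwarding;
# NOT Radient's actual implementation.
def make_vectorizer(method="sentence-transformers", **kwargs):
    # Each backend receives only the kwargs it was given.
    registry = {
        "sentence-transformers": lambda **kw: (
            f"ST model: {kw.get('model_name_or_path', 'bge-small-en-v1.5')}"
        ),
    }
    if method not in registry:
        raise ValueError(f"unknown method: {method}")
    # Everything beyond `method` goes straight to the backend.
    return registry[method](**kwargs)

desc = make_vectorizer(
    method="sentence-transformers",
    model_name_or_path="mixedbread-ai/mxbai-embed-large-v1",
)
```

This is why backend-specific options like model_name_or_path don't need to be declared by the factory itself: they simply flow through unchanged.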
More than just text
With Radient, you're not limited to text. Audio, graphs, images, and molecules can be vectorized as well:
from pathlib import Path

import networkx as nx

from radient import (
    audio_vectorizer,
    graph_vectorizer,
    image_vectorizer,
    molecule_vectorizer,
)

avec = audio_vectorizer().vectorize(str(Path.home() / "audio.wav"))
gvec = graph_vectorizer().vectorize(nx.karate_club_graph())
ivec = image_vectorizer().vectorize(str(Path.home() / "image.jpg"))
mvec = molecule_vectorizer().vectorize("O=C=O")
A partial list of methods and optional kwargs supported by each modality can be found here.
For production use cases with large quantities of data, performance is key. Radient also provides an accelerate function to optimize vectorizers on-the-fly:
import numpy as np
vz = text_vectorizer()
vec0 = vz.vectorize("Hello, world!")
vz.accelerate()
vec1 = vz.vectorize("Hello, world!")
np.allclose(vec0, vec1) # True
On a 2.3 GHz Quad-Core Intel Core i7, the original vectorizer returns in ~32ms, while the accelerated vectorizer returns in ~17ms.
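You can reproduce this kind of comparison on your own hardware with a small timing harness. The sleep-based stand-ins below only illustrate the measurement pattern (best-of-N wall-clock timing); they are not Radient's code:

```python
import time

def time_call(fn, *args, repeats=5):
    """Return the best-of-N wall-clock time for fn(*args), in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

# Stand-ins for an original and an accelerated vectorizer.
def slow_vectorize(text):
    time.sleep(0.02)

def fast_vectorize(text):
    time.sleep(0.01)

t0 = time_call(slow_vectorize, "Hello, world!")
t1 = time_call(fast_vectorize, "Hello, world!")
print(f"original: {t0 * 1000:.1f} ms, accelerated: {t1 * 1000:.1f} ms")
```

Taking the best of several runs rather than a single measurement helps filter out warm-up and scheduler noise, which matters when the calls being compared are only tens of milliseconds long.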
Building unstructured data ETL
Aside from running experiments, pure vectorization is not particularly useful. Mirroring structured data ETL pipelines, unstructured data ETL workloads often require a combination of four components:
- A data source (or a data reader) for extracting unstructured data,
- One or more transform modules such as video demuxing or OCR,
- A vectorizer or set of vectorizers for turning the data into embeddings, and
- A place to store the vectors once they have been computed.
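The four stages above compose naturally as a linear pipeline. As a framework-free sketch (the reader, "embedding" function, and sink below are stand-ins, not Radient's API):

```python
# Framework-free sketch of the four unstructured-data ETL stages.
def extract():
    # Data source: in reality, e.g. files pulled from Google Drive.
    yield from ["doc one", "doc two"]

def transform(raw):
    # Transform: e.g. OCR or video demuxing; here just text normalization.
    return raw.strip().lower()

def vectorize(text):
    # Vectorizer: a real embedding model would go here.
    return [float(len(text)), float(text.count(" "))]

store = []  # Sink: a vector database in a real deployment.

for item in extract():
    store.append(vectorize(transform(item)))
```

In practice each stage has its own failure modes, batching needs, and rate limits, which is exactly what a workflow abstraction manages for you.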
Radient provides a Workflow object specifically for building vector-centric ETL applications. With Workflows, you can combine any number of each of these components into a directed graph. For example, a workflow to continuously read text documents from Google Drive and vectorize them into Milvus might look like:
from radient import make_operator
from radient import Workflow
extract = make_operator(optype="source", method="google-drive", task_params={"folder": "My Files"})
transform = make_operator(optype="transform", method="read-text", task_params={})
vectorize = make_operator(optype="vectorizer", method="voyage-ai", modality="text", task_params={})
load = make_operator(optype="sink", method="milvus", task_params={"operation": "insert"})
wf = (
Workflow()
.add(extract, name="extract")
.add(transform, name="transform")
.add(vectorize, name="vectorize")
.add(load, name="load")
)
You can use accelerated vectorizers and transforms in a Workflow by specifying accelerate=True for all supported tasks.
Supported libraries
Radient builds atop work from the broader ML community. Most vectorizers come from other libraries, and on-the-fly model acceleration is done via ONNX.
A massive thank you to all the creators and maintainers of these libraries.
Coming soon™
A couple of features slated for the near-term (hopefully):
- Sparse vector, binary vector, and multi-vector support
- Support for all relevant embedding models on Hugging Face

LLM connectors will not be a feature that Radient provides. Building context-aware systems around LLMs is a complex task, and not one that Radient intends to solve. Projects such as Haystack and LlamaIndex are two of the many great options to consider if you're looking to extract maximum RAG performance.
Full write-up on Radient will come later, along with more sample applications, so stay tuned.