VectorVault: Simplified vector database management in the cloud for machine learning and generative AI workflows
Project description
Vector Vault is a vector database cloud service built to make generative AI chat quick and easy. It allows you to seamlessly vectorize data and access it from the cloud, and it scales from small projects to large applications with millions of users. Vector Vault has been designed with a user-friendly code interface so that working with vector search is easy and you can focus on what matters: results. Vector Vault ensures secure and isolated data handling and lets you create and interact with vector databases - aka "vaults" - in the cloud with sub-second response times.
The vectorvault package comes with extensive chat functionality, so you don't have to think about the details and can build smooth chat applications with ease. Speaking of smooth chat experiences, vectorvault also comes with chat streaming built in. Simply use the get_chat_stream() function, which works just like the regular get_chat() but with streaming.
langchain is a popular package for building functionality with LLMs, but when it comes to referencing vector databases, the database retrieval and chat integration can be complicated and difficult. This is one of the reasons we built vectorvault. We've integrated all the chat options people like to use with langchain, but made them all easier and more straightforward to use. Now with Vector Vault, integrating vector database results into generative chat applications is not only easy, it's the default. You also have total control over every aspect with parameters. If you have been looking for an easy and reliable way to use vector databases with ChatGPT, then Vector Vault is for you.
By combining vector similarity search with generative AI chat, new possibilities for conversation and communication emerge. For example, product information can be added to a Vault, and when a customer asks a product question, the right product information can be instantly retrieved and seamlessly used in conversation by ChatGPT for an accurate response. This capability allows for informed conversation, and the possibilities range from AI-automated customer support, to new ways to get news, to AI code reviews that reference source documentation, to AI domain experts for specific knowledge sets, and much more. You will need an API key in order to access the Vault Cloud. If you don't already have one, you can sign up at VectorVault.io
The vectorvault package allows you to interact with your Vault Cloud using its Python-based API. Each vault is a separate vector database. vectorvault includes operations such as creating a vault, deleting a vault, preparing data to add, getting vector embeddings for prepared data with OpenAI's text-embedding-ada-002 model, saving the data and embeddings to the cloud, referencing cloud vault data via vector search and retrieval, interacting with OpenAI's ChatGPT model to get responses, managing conversation history, and retrieving contextualized responses with reference vault data as context.
Basic Interactions:
add(): Prepares data to be added to the Vault, with automatic text splitting and processing for long texts
get_vectors(): Retrieves vector embeddings for all prepared data
save(): Saves the data with embeddings to the Vault (cloud), along with any metadata
delete(): Deletes the current Vault and all contents
get_vaults(): Retrieves a list of Vaults within the current Vault directory
get_similar(): Retrieves similar texts from the Vault for a given input text - we process vectors in the cloud
get_similar_local(): Retrieves similar texts from the Vault for a given input text - you process vectors locally
get_chat(): Retrieves a response from ChatGPT, with support for handling conversation history, summarizing responses, and retrieving context-based responses by referencing similar data in the vault
get_vectors() uses the OpenAI embeddings API and internally batches vector embeddings with OpenAI's text-embedding-ada-002 model, with auto rate-limiting and concurrent requests for maximum processing speed.
Access The Vault:
Install Vector Vault:
pip install vector-vault
Get Your Vector Vault API Key:
from vectorvault import register
register(first_name='John', last_name='Smith', email='john@smith.com', password='make_a_password')
The API key will be sent to your email.
Build The Vault:
Set your OpenAI API key as an environment variable:
import os
os.environ['OPENAI_API_KEY'] = 'your_openai_api_key'
- Create a Vault instance (a new vault will be created if the name does not exist)
- Gather some text data we want to store
- Add the data to the Vault
- Get vector embeddings
- Save to the cloud vault
from vectorvault import Vault
vault = Vault(user='your@email.com', api_key='your_api_key', vault='name_of_vault')
text_data = 'some data'
vault.add(text_data)
vault.get_vectors()
vault.save()
Now that you have saved some data to the vault, you can add more at any time. vault.add() is very versatile. You can add any length of text, even a full book, and it will all be automatically split and processed. vault.get_vectors() is also extremely flexible, because you can vault.add() as much as you want, then when you're done, process all the vectors at once with a single vault.get_vectors() call - which internally batches vector embeddings with OpenAI's text-embedding-ada-002 and comes with auto rate-limiting and concurrent requests for maximum processing speed.
vault.add(very_large_text)
vault.get_vectors()
vault.save()
^ these three lines execute fast and can be called as often as you like. For example: you can use add(), get_vectors(), and save() mid-conversation to save every message to the vault as soon as it comes in. Small loads are usually finished in less than a second. Large loads depend on total data size.
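For instance, a minimal sketch of saving every incoming chat message as it arrives (the on_new_message handler is hypothetical; only add(), get_vectors(), and save() come from vectorvault):
def on_new_message(vault, message):
    # prepare the message, embed it, and persist it to the cloud vault
    vault.add(message)
    vault.get_vectors()
    vault.save()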
A test was done adding the full text of 37 books at once. The get_vectors() function took 8 minutes and 56 seconds. (For comparison, processing one at a time via OpenAI's embedding function would take roughly two days.)
Use The Vault:
You can create a JavaScript or HTML POST to "https://api.vectorvault.io/get_similar" to run front-end apps.
Since your Vault lives in the cloud, making a call to it is really easy. You can even do it with curl from the command line:
curl -X POST "https://api.vectorvault.io/get_similar" \
-H "Content-Type: application/json" \
-d '{
"user": "your_username",
"api_key": "your_api_key",
"vault": "your_vault_name",
"text": "Your text input"
}'
{"results":[{"data":"NASA Mars Exploration... shortend for brevity","metadata":{"created_at":"2023-05-29T19:21:20.846023","item_id":0,"name":"webdump-0","updated_at":"2023-05-29T19:21:20.846028"}}]}
This is the same exact call, but in Python:
similar_data = vault.get_similar("Your text input")
for result in similar_data:
    print(result['data'])
NASA Mars Exploration... NASA To Host Briefing... Program studies Mars... A Look at a Steep North Polar...
^ this prints each similar item that was retrieved. The get_similar() function retrieves items from the vault using a vector cosine similarity search algorithm. By default it returns a list with 4 results.
similar_data = vault.get_similar(text_input, n = 10)
returns 10 results instead of 4.
Print the metadata:
similar_data = vault.get_similar("Your text input")
for result in similar_data:
    print(result['data'])
    print(result['metadata'])
NASA Mars Exploration... {"created_at":"2023-05-29T19...} NASA To Host Briefing... {"created_at":"2023-05-29T19...} Program studies Mars... {"created_at":"2023-05-29T19...} A Look at a Steep North Polar... {"created_at":"2023-05-29T19...}
Use get_chat() with get_context=True to get a response from ChatGPT that references vault data.
Retrieving items from the vault is useful for supplying context to a large language model, like ChatGPT, to get a contextualized response. The following example searches the vault for 4 similar results, then gives those to ChatGPT as context and asks it to answer the question using the vault data.
question = "Should I use Vector Vault for my next generative ai application"
answer = vault.get_chat(question, get_context=True)
print(answer)
The following line will send ChatGPT the question for a response and will not interact with the vault in any way:
answer = vault.get_chat(question)
ChatGPT
Use ChatGPT with get_chat()
Get a chat response from OpenAI's ChatGPT. Rate limiting, auto retries, and chat history slicing are built in, so you can create complex chat capability without getting complicated. Enter your text, optionally add chat history, and optionally choose a summary response instead (default: summary=False).
Example Single Usage:
response = vault.get_chat(text)
Example Chat:
response = vault.get_chat(text, chat_history)
Example Summary:
summary = vault.get_chat(text, summary=True)
Example Context-Based Response:
response = vault.get_chat(text, get_context=True)
Example Context-Based Response w/ Chat History:
response = vault.get_chat(text, chat_history, get_context=True)
Example Context-Response with Context Samples Returned:
vault_response = vault.get_chat(text, get_context=True, return_context=True)
The response from ChatGPT comes back as a string, unless return_context=True is passed, in which case the response is a dictionary containing both the ChatGPT response and the vault data.
# print response:
print(vault_response['response'])
# print context:
for item in vault_response['context']:
    print("\n\n", f"item {item['metadata']['name']}")
    print(item['data'])
Summarize Anything:
You can summarize any text, no matter how large - even an entire book all at once. Long texts are split into the largest possible chunk sizes and a summary is generated for each chunk. When all summaries are finished, they are concatenated and returned as one.
summary = vault.get_chat(text, summary=True)
Want to make a summary of a certain length?
summary = vault.get_chat(text, summary=True)
while len(summary) > 1000:
    summary = vault.get_chat(summary, summary=True)
^ in the above example, we make a summary, then enter a while loop that continues until the summary received back is under a certain length. You could use this to summarize a 1000-page book down to less than 1000 characters of text.
Streaming:
Use the built-in streaming functionality to get interactive chat streaming. We built an app to showcase what you can do with Vector Vault.
get_chat_stream():
Example Usage: vault.print_stream(vault.get_chat_stream(text))
Always use get_chat_stream() wrapped by either print_stream() or cloud_stream().
cloud_stream() is for cloud functions, like a Flask app serving a front end elsewhere.
print_stream() is for local console printing.
Example Single Usage:
response = vault.print_stream(vault.get_chat_stream(text))
Example Chat:
response = vault.print_stream(vault.get_chat_stream(text, chat_history))
Example Summary:
summary = vault.print_stream(vault.get_chat_stream(text, summary=True))
Example Context-Based Response:
response = vault.print_stream(vault.get_chat_stream(text, get_context = True))
Example Context-Based Response w/ Chat History:
response = vault.print_stream(vault.get_chat_stream(text, chat_history, get_context = True))
Example Context-Response with Context Samples Returned:
vault_response = vault.print_stream(vault.get_chat_stream(text, get_context = True, return_context = True))
Example Context-Response with SPECIFIC META TAGS for Context Samples Returned:
vault_response = vault.print_stream(vault.get_chat_stream(text, get_context = True, return_context = True, include_context_meta=True, metatag=['title', 'author']))
Example Context-Response with SPECIFIC META TAGS for Context Samples Returned & Specific Meta Prefixes and Suffixes:
vault_response = vault.print_stream(vault.get_chat_stream(text, get_context = True, return_context = True, include_context_meta=True, metatag=['title', 'author'], metatag_prefixes=['\n\n Title: ', '\nAuthor: '], metatag_suffixes=['', '\n']))
The response is always a stream.
vault.get_chat_stream will start a chat stream. The input parameters are mostly the same as the regular get_chat(), and the capabilities are all the same. The only difference is that get_chat() returns the whole reply message at once, while get_chat_stream() yields each word as it is received. This means that using get_chat_stream() is very different from using get_chat(). Here's an example that prints the same message both ways to show the difference:
## get_chat()
print(vault.get_chat(text, history))
## get_chat_stream()
for word in vault.get_chat_stream(text, history):
    print(word)
This will take each word yielded and print it as it comes in. However, it's best to use the built-in print function print_stream().
vault.print_stream(vault.get_chat_stream(text, history))
Because streaming is a key functionality for end user applications, we also have a cloud_stream function to make cloud streaming to your front end app easy. In a Flask app, your return would look like:
return Response(vault.cloud_stream(vault.get_chat_stream(text, history, get_context=True)), mimetype='text/event-stream')
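For context, here is a minimal Flask sketch built around that return (the route name, JSON fields, and app setup are assumptions for illustration; only cloud_stream() and get_chat_stream() come from vectorvault):
from flask import Flask, Response, request
from vectorvault import Vault

app = Flask(__name__)
vault = Vault(user='your@email.com', api_key='your_api_key', vault='name_of_vault')

@app.route('/chat', methods=['POST'])
def chat():
    data = request.get_json()
    text = data['text']                  # user message from the front end
    history = data.get('history', '')    # optional chat history
    return Response(
        vault.cloud_stream(vault.get_chat_stream(text, history, get_context=True)),
        mimetype='text/event-stream')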
This makes going live with highly functional cloud apps really easy. Now you can build impressive applications in record time! If you have any questions, message us in Discord.
Build an AI Customer Service Chat Bot
In the following code, we will add all of a company's past support conversations to a cloud Vault. (We load the company support texts from a .txt file, vectorize them, then add them to the Vault). As new people message in, we will vector search the Vault for similar questions and answers. We take the past answers returned from the Vault and instruct ChatGPT to use those previous answers to answer this new question. (NOTE: This will also work based on a customer FAQ, or customer support response templates).
Create the Customer Service Vault
from vectorvault import Vault
import os

os.environ['OPENAI_API_KEY'] = 'your_openai_api_key'
vault = Vault(user='your_user_id', api_key='your_api_key', vault='Customer Service')
with open('customer_service.txt', 'r') as f:
    vault.add(f.read())
vault.get_vectors()
vault.save()
And just like that, in only a few lines of code we created a customer service vault. Now whenever you want to use it in production, just connect to that vault and use get_chat() with get_context=True. When you call get_chat(text, get_context=True), it will take the customer's question, search the vault to find the most similar questions and answers, then have ChatGPT reply to the customer using that information.
question = 'customer question'
answer = vault.get_chat(question, get_context=True)
That's all it takes to create an AI customer service chatbot that responds as well as any support rep!
Getting Started:
Open the examples folder and try out the Google Colab tutorials we have! They will show you a lot, plus they are in Google Colab, so no local setup is required - just open them up and press play.
Contact:
If you have any questions, drop a message in the Vector Vault Discord channel - we're happy to help.
Happy coding!
FAQ
What is the latency on large datasets?
The full text of 37 books with vectors makes up ~250MB of storage, with around 10,000 - 15,000 items of 1000+ characters each. This example of 37 books is considered small. Personal plans come with 1GB of storage, so this doesn't even come close to the plan limit. Calling similar items from this vault takes about one second: vectors retrieved, vectors searched, then similar items returned. Running locally, you can expect ~0.7 seconds; just referencing the cloud, you can expect ~1 second. This example is about the same amount of data as the entire customer support history for a typical company, so if you build a typical customer service chatbot, your vault size will be considered small. You can try our app at the bottom of our website to see the latency for yourself. Keep in mind that if you do so, you will be seeing the latency from ChatGPT on top of the vault time, but it's still so fast that it's not noticeable during a conversation. If you had 10 times that much data, API latency may be around 2 seconds max, so still fast enough for real-time conversations. Or you can get an Enterprise Cloud Plan from us and make it even faster.
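If you want to check these numbers for your own vault, here is a quick sketch using Python's standard time module (the query string is a placeholder and assumes a vault instance has already been created as shown above):
import time

# time the cloud-side similarity search
start = time.time()
results = vault.get_similar("Your text input")
print(f"get_similar (cloud): {time.time() - start:.2f}s")

# time the local similarity search for comparison
start = time.time()
results_local = vault.get_similar_local("Your text input")
print(f"get_similar_local (local): {time.time() - start:.2f}s")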
How should I segment my data?
Vaults within vaults is the optimal structure for segmenting data. If a vault grows too large, just make multiple child vaults within the current vault directory and store the data there. If your 'Science' vault grows too large, split it into multiple child vaults, like 'Science/Chemistry', etc. - this accesses a 'Chemistry' vault within the Science vault. Now you can fine-grain your datasets, where every child vault contains more specific subject information than the parent vault. This segmenting structure keeps each dataset focused, even when your overall data is large.
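For example, here is a minimal sketch of working with a child vault (credentials and text are placeholders; the path-style vault name follows the 'Science/Chemistry' convention above):
from vectorvault import Vault

# connect to the 'Chemistry' child vault inside the 'Science' vault
chem_vault = Vault(user='your@email.com', api_key='your_api_key', vault='Science/Chemistry')

chem_vault.add('Notes on reaction kinetics...')
chem_vault.get_vectors()
chem_vault.save()

# later, search only within the Chemistry child vault
similar = chem_vault.get_similar('reaction kinetics')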
What if I'm a large company with very large data?
If you need to store more than 1GB of data in a single vault for any reason, let us know and we can set you up with an Enterprise Cloud Plan. In our Enterprise plan, we create a persistent storage pod with as much memory as you need. It is always active and scalable to terabytes. With an Enterprise plan, a billion-vector search will respond in one second. For reference, the full text of 3.7 million books would be ~1.1 billion vectors and take up ~8 terabytes of storage. If this is what you're looking for, just reach out to us by email at support at vectorvault.io.