Quickly create ChatGPT RAG apps and unleash the full potential of generative AI with Vector Vault
Vector Vault is a cloud-native vector database combined with OpenAI. Easily call ChatGPT or GPT-4 and customize how they respond. Take any text data, vectorize it, and add it to the cloud vector database in three lines of code. Vector Vault enables you to quickly and securely create and interact with your vector databases - aka "Vaults". Vaults are hosted on serverless distributed cloud architecture backed by Google, making vectorvault scalable to any project size.
vectorvault takes inspiration from LangChain, integrating its most popular chat features and LLM tools. However, by combining vector databases with OpenAI's chat in a single package, vectorvault hides most of the complexity, making it simple to build custom chat experiences. Using ChatGPT through the vectorvault package is even easier than using OpenAI's default package, and you can customize what ChatGPT says by adding the kinds of things you want it to say to the Vault.
See the tutorials in the Examples folder. You don't need a Vector Vault API key to use the tools or chat features, but you will need one to access the Vault Cloud and create/use vector databases. If you don't already have one, you can sign up for free at VectorVault.io. While the service is paid at production scale, the first tier is free, allowing you to develop quickly. Affordability is one of the best reasons to use Vector Vault: thanks to the serverless cloud architecture, you can create an unlimited number of isolated vector databases while only paying for the references you make to them.
Full Python API:
pip install vector-vault : install
from vectorvault import Vault : import
vault = Vault(user='your_email', api_key='your_api_key', openai_key='your_openai_api_key') : create a Vault instance and connect to OpenAI. (Pass verbose=True to print all communications and notifications to the terminal while building.)
vault.add(text, meta=None, name='', split=False, split_size=1000) : Loads data to be added to the Vault, with automatic text splitting for long texts. text is a text string and is the only required input. meta is a dictionary. split=True will split your text input based on split_size, creating a new item for each split. The name parameter is a shortcut for adding a "name" field to the metadata. If you don't add a name or any metadata, generic info will be added for you.
vault.get_vectors() : Retrieves vector embeddings for all loaded data. (No parameters)
vault.save() : Saves all loaded data with embeddings to the Vault (cloud), along with any metadata. (No parameters)
vault.delete() : Deletes the current Vault and all its contents. (No parameters)
vault.get_vaults() : Retrieves a list of Vaults within the current Vault directory. (No parameters)
vault.get_similar(text, n) : Retrieves similar texts from the Vault for a given input text - processes vectors in the cloud. text is required; n is optional, default = 4.
vault.get_similar_local(text, n) : Retrieves similar texts from the Vault for a given input text - processes vectors locally. text is required; n is optional, default = 4. Local version for speed-optimized production.
vault.get_total_items() : Returns the total number of items in the Vault.
vault.clear_cache() : Clears the cache for all the loaded items - add() loads an item.
vault.get_items_by_vector(vector, n) : Returns items similar to the input vector. vector is required; n is the number of items you want returned, default = 4.
vault.get_distance(id1, id2) : Gets the vector distance between items id1 and id2 in the Vault.
Items can be retrieved from the Vault with a nearest-neighbor search using get_similar(), and the item_ids can be found in the metadata. Item ids are numeric and sequential, so all items in the Vault can be accessed by iterating from beginning to end - e.g. for i in range(vault.get_total_items()): - see the sketch after this reference list.
vault.get_item_vector(id) : Returns the vector for item id in the Vault.
vault.get_items(ids) : Returns a list containing your item(s). ids is a list of ids, one or many.
vault.cloud_stream(function) : For cloud applications yielding the chat stream, like a Flask app. Called like vault.cloud_stream(vault.get_chat_stream('some_text')) in the return of a Flask route.
vault.print_stream(function) : For locally printing the chat stream. Called like vault.print_stream(vault.get_chat_stream('some_text')). You can also assign it to a variable, like reply = vault.print_stream(...). It still streams to the console, but the final complete text will also be available in the reply variable.
vault.get_chat() : Retrieves a response from ChatGPT, with parameters for handling conversation history, summarizing responses, and retrieving context-based responses that reference similar data in the Vault. (See the dedicated section below on this function and its parameters.)
vault.get_chat_stream() : Retrieves a response from ChatGPT in stream format, with parameters for handling conversation history, summarizing responses, and retrieving context-based responses that reference similar data in the Vault. (See the dedicated section below on this function and its parameters.)
Install:
Install Vector Vault:
pip install vector-vault
Upload:
- Create a Vault instance
- Gather some text data we want to store
- Add the data to the Vault
- Get vector embeddings
- Save to the Cloud
from vectorvault import Vault
vault = Vault(user='YOUR_EMAIL',
api_key='YOUR_API_KEY',
openai_key='YOUR_OPENAI_KEY',
vault='NAME_OF_VAULT')
# a new vault will be created if the 'vault' name does not already exist
# if name already exists, you will be connected to the existing vault
vault.add('some text')
vault.get_vectors()
vault.save()
That's all it takes to save your data to a cloud vector database. Now you can quickly search the database or use it as context for ChatGPT - aka RAG (Retrieval-Augmented Generation) - like so:
question = "Should I use Vector Vault for my next generative ai application?"
answer = vault.get_chat(question, get_context=True)
print(answer)
Vector Vault simplifies the process of creating generative AI, making it a compelling choice for your next project involving generative AI. It's essential to consider your specific use cases and the technologies you're currently utilizing. Nevertheless, Vector Vault's seamless integration into various workflows and its ability to operate in a cloud-based environment make it an ideal solution for incorporating generative AI into any application. To achieve this, you can simply input your text into your Vector Vault implementation and retrieve the generated response. Additionally, you have the option to access the Vector Vault API directly from a JavaScript front-end interface, eliminating the need for setting up your own backend implementation. With these advantages in mind, Vector Vault is likely to streamline the development of your next generative AI application, making it faster and more straightforward.
vault.add() is very versatile. You can add any length of text, even a full book, and it will all be automatically split and processed. vault.get_vectors() is also extremely flexible. You can vault.add() as much as you want, and then when you're done, process all the vectors at once with a single vault.get_vectors() call - which internally batches vector embeddings with OpenAI's text-embedding-ada-002, with auto rate-limiting and concurrent requests for maximum processing speed.
vault.add(very_large_text)
vault.get_vectors()
vault.save()
# these three lines execute fast and can be called mid-conversation before a reply
Small save loads usually finish in less than a second. Large loads depend on total data size. A 2,000-page book (e.g. the Bible) takes ~30 seconds. In one test adding 37 books, the get_vectors() call took 8 minutes and 56 seconds. (For comparison, processing via OpenAI's standard embedding function, found in their documentation, would take over two days.) This dramatically faster processing time is due to our built-in concurrency and internal text-uploading methods that are optimized for speed and have built-in rate limiting.
Reference:
In Python:
similar_data = vault.get_similar("Your text input")
for result in similar_data:
print(result['data'])
NASA Mars Exploration...
NASA To Host Briefing...
Program studies Mars...
A Look at a Steep North Polar...
The exact same call, but from command line:
curl -X POST "https://api.vectorvault.io/get_similar" \
-H "Content-Type: application/json" \
-d '{
"user": "your_username",
"api_key": "your_api_key",
"openi_key": "your_openai_api_key",
"vault": "your_vault_name",
"text": "Your text input"
}'
[{"data":"NASA Mars Exploration... (shortend for brevity)","metadata":{"created_at":"2023-05-29T19:21:20.846023","item_id":0,"name":"webdump-0","updated_at":"2023-05-29T19:21:20.846028"}}]
Back to Python, here's how to print the data and metadata together:
for result in similar_data:
print(result['data'])
print(result['metadata'])
NASA Mars Exploration... {"created_at":"2023-05-29T19...}
NASA To Host Briefing... {"created_at":"2023-05-29T19...}
Program studies Mars... {"created_at":"2023-05-29T19...}
A Look at a Steep North Polar... {"created_at":"2023-05-29T19...}
Metadata Made Easy
Metadata is important for knowing where your data came from, when it was made, and anything else you want to know about the data you add to the Vault. The Vault is your vector database, and when you add data to it to be searched, the metadata will always come back with every search result. Add anything you want to the metadata and it will be permanently saved.
# To add metadata to your vault, just include the meta as a parameter in `add()`. Meta is always a dict, and you can add any fields you want.
metadata = {
'name': 'Lifestyle in LA',
'country': 'United States',
'city': 'LA'
}
vault.add(text, meta=metadata)
vault.get_vectors()
vault.save()
# To read any metadata field, index into ['metadata'], then the field you want - like ['name']:
similar_data = vault.get_similar("Your text input") # 4 results by default
# printing metadata from first result...
print(similar_data[0]['metadata']['name'])
print(similar_data[0]['metadata']['country'])
print(similar_data[0]['metadata']['city'])
Lifestyle in LA
United States
LA
Add Any Fields:
# Add any fields you want to the metadata:
with open('1984.txt', 'r') as file:
text = file.read()
book_metadata = {
'title': '1984',
'author': 'George Orwell',
'genre': 'Dystopian',
'publication_year': 1949,
'publisher': 'Secker & Warburg',
'ISBN': '978-0451524935',
'language': 'English',
'page_count': 328
}
vault.add(text, meta=book_metadata)
vault.get_vectors()
vault.save()
# Later you can get any of those fields
similar_data = vault.get_similar("How will the government control you in the future?")
# `get_similar` returns 4 results by default
for result in similar_data:
print(result['metadata']['title'])
print(result['metadata']['author'])
print(result['metadata']['genre'])
1984
George Orwell
Dystopian
1984
George Orwell
Dystopian
1984
George Orwell
Dystopian
1984
George Orwell
Dystopian
# Results are always returned in a list, so '[0]' pulls the first result
similar_data = vault.get_similar("How will the government control you in the future?")
print(similar_data[0]['metadata']['title'])
print(similar_data[0]['metadata']['author'])
print(similar_data[0]['metadata']['genre'])
1984
George Orwell
Dystopian
Change Vaults
# print the list of vaults inside the current vault directory
science_vault = Vault(user='YOUR_EMAIL',
api_key='YOUR_API_KEY',
openai_key='YOUR_OPENAI_KEY',
vault='science')
print(science_vault.get_vaults())
['biology', 'physics', 'chemistry']
Access vaults within vaults like so:
# biology vault within science vault
biology_vault = Vault(user='YOUR_EMAIL',
api_key='YOUR_API_KEY',
openai_key='YOUR_OPENAI_KEY',
vault='science/biology')
# chemistry vault within science vault
chemistry_vault = Vault(user='YOUR_EMAIL',
api_key='YOUR_API_KEY',
openai_key='YOUR_OPENAI_KEY',
vault='science/chemistry')
# list the vaults within the current directory with `get_vaults`
print(chemistry_vault.get_vaults())
['reactions', 'formulas', 'lab notes']
# lab notes vault, within chemistry vault, within science vault
lab_notes_vault = Vault(user='YOUR_EMAIL',
api_key='YOUR_API_KEY',
openai_key='YOUR_OPENAI_KEY',
vault='science/chemistry/lab notes')
Each vault is a separate and isolated vector database.
Use get_chat() with get_context=True to get a response from ChatGPT that references Vault data
question = "Should I use Vector Vault for my next generative ai application?"
answer = vault.get_chat(question, get_context=True)
print(answer)
Vector Vault makes building generative ai easy, so you should consider using Vector Vault for your next generative ai project. Additionally, it is important to keep in mind your specific use cases and the other technologies you are working with. However, given the fact that Vector Vault can be integrated in any work flow and be isolated in a cloud environment, it is an ideal package to integrate into any application that you want to utilize generative ai with. To do so, just send the text inputs to your Vector Vault implementation and return the response. With this in mind, it is likely that Vector Vault would make building your next generative ai application both faster and easier.
To integrate Vault data into the response, you need to pass get_context=True:
# this will get context from the vault, then ask chatgpt the question
answer = vault.get_chat(question, get_context=True)
# this will send to chatgpt only and not interact with the Vault in any way
answer = vault.get_chat(question)
LLM Exclusive Tools (vault.tools):
• get_rating: -> int
Useful for getting a quality rating (always true: answer in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
• get_yes_no: -> 'yes' | 'no'
Useful for getting a definitive answer (always true: answer == 'yes' or answer == 'no')
• get_binary: -> 0 | 1
Useful for getting a definitive answer in 0/1 format (always true: answer == 0 or answer == 1)
• get_match: -> exact match to item in list
Useful for getting an exact match to a single option within a set of options. In: text and a list of answers -> out: exact match to one answer in the list
• get_topic: -> 1-3 word descriptor
Useful for classifying the topic of a conversation
• match_or_make: -> exact match to item in list, or a new string to add to the list
Gets a match to a list of options, or makes a new one if the input is unrelated. Useful if you aren't sure the input will match one of your existing options and need the flexibility to create a new one. When starting from an empty list, it will create options from scratch. (See the sketch after the examples below.)
# Tools example 1:
number_out_of_ten = vault.tools.get_rating('How does LeBron James compare to Michael Jordan?')
print(number_out_of_ten)
8
# Tools example 2:
this_or_that = vault.tools.get_binary('Should I turn right or left? 0 for right, 1 for left')
print(this_or_that)
0
# Tools example 3:
answer = vault.tools.get_yes_no('Should I use Vector Vault to build my next AI project?')
print(answer)
yes
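The remaining tools follow the same pattern. A minimal sketch based on the descriptions above - the exact call signatures here are assumptions, so check vault.tools if they differ:
# get_match: exact match to one option in a list (signature assumed)
category = vault.tools.get_match('My package never arrived', ['shipping', 'billing', 'returns'])

# get_topic: 1-3 word descriptor of the conversation (signature assumed)
topic = vault.tools.get_topic('I think my thermostat is set too high for summer')

# match_or_make: match an existing option, or make a new one if unrelated (signature assumed)
label = vault.tools.match_or_make('The app crashes on launch', ['bug report', 'feature request'])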
ChatGPT
Use ChatGPT with get_chat()
Get a chat response from OpenAI's ChatGPT. With built-in rate limiting, auto retries, and automatic chat history slicing, you can create complex chat capability without things getting complicated. All you have to add is the text, and Vector Vault takes care of the rest.
The get_chat() function:
get_chat(self, text: str, history: str = None, summary: bool = False, get_context = False, n_context = 4, return_context = False, history_search = False, model='gpt-3.5-turbo', include_context_meta=False, custom_prompt=False)
-
Example Single Usage:
response = vault.get_chat(text)
-
Example Chat:
response = vault.get_chat(text, chat_history)
-
Example Summary:
summary = vault.get_chat(text, summary=True)
-
Example Context-Based Response:
response = vault.get_chat(text, get_context=True)
-
Example Context-Based Response w/ Chat History:
response = vault.get_chat(text, chat_history, get_context=True)
-
Example Context-Response with Context Samples Returned:
vault_response = vault.get_chat(text, get_context=True, return_context=True)
The response is a string, unless return_context == True, in which case the response will be a dictionary.
-
Example Custom Prompt:
response = vault.get_chat(text, chat_history, get_context=True, custom_prompt=my_prompt)
custom_prompt overrides the stock prompt we provide. Check ai.py to see the originals. The llm and llm_stream models manage history internally, so content is the only variable to be included and formattable in the prompt.
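For example, a minimal sketch of a custom prompt for this no-context path, with {content} as the only formattable variable:
# custom prompt for the no-context path - only {content} is formattable
my_prompt = """
Answer the following in the style of a friendly librarian.
{content}
"""
response = vault.get_chat(text, custom_prompt=my_prompt)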
Example with GPT4:
response = vault.get_chat(text, chat_history, get_context=True, model='gpt-4')
Getting context from the Vault is usually the goal when customizing text generation, and doing that requires additional prompt variables. The llm_w_context and llm_w_context_stream models inject the history, context, and user input all in one prompt. In this case, your custom prompt needs {history}, {context}, and {question} formattable in the prompt - see the full example under Streaming below. You can also build a prompt around your own variables:
Example Custom Prompt:
# You can build a custom prompt with custom variables:
my_prompt = """
Use the following information to answer the Question at the end.
Math Result: {math_answer}
Known Variables: {known_vars}
Question: {question}
(Respond to the Question directly. Be the voice of the context, and most importantly: be interesting, engaging, and helpful)
Answer:
"""
response = vault.get_chat(custom_prompt=my_prompt)
A custom prompt makes the get_chat() function flexible for any use case. Check ai.py to see the stock prompt templates, and get a better idea of how they work...or just send me a message in Discord.
Normal Usage:
# connect to the vault you want to use
vault = Vault(user='YOUR_EMAIL',
api_key='YOUR_API_KEY',
openai_key='YOUR_OPENAI_KEY',
vault='philosophy')
# text input
question = "How do you find happiness?"
# get response
answer = vault.get_chat(question, get_context=True)
print(answer)
The answer to finding happiness is not one-size-fits-all, as it can mean different things to different people. However, it has been found that happiness comes from living and working in line with your values and virtues, and finding pleasure in the actions that accord with them. Additionally, having good friends who share your values and provide support and companionship enhances happiness. It is important to remember that happiness cannot be solely dependent on external factors such as material possessions or fleeting pleasures, as they are subject to change and instability. Rather, true happiness may come from an inner sense of mastery and control over yourself and your actions, as well as a sense of purpose and meaning in life.
Summarize Anything:
You can summarize any text, no matter how large - even an entire book all at once. Long texts are split into the largest possible chunk sizes and a summary is generated for each chunk. When all summaries are finished, they are concatenated and returned as one.
# get summary, no matter how large the input text
summary = vault.get_chat(text, summary=True)
Want to make it a certain length?
# make a summary under a length of 1000 characters
summary = vault.get_chat(text, summary=True)
while len(summary) > 1000:
summary = vault.get_chat(summary, summary=True)
Streaming:
Use the built-in streaming functionality to get interactive chat streaming:
get_chat_stream():
See it in action - check our examples folder for Colab notebooks you can be running in the browser seconds from now.
The get_chat() function returns the whole message at once, whereas get_chat_stream() yields each word as it's received. Other than that, they are nearly identical and take nearly the same input parameters. Streaming is a much better experience and the preferred option for front-end applications users interact with.
## get_chat()
print(vault.get_chat(text, history))
## get_chat_stream()
for word in vault.get_chat_stream(text, history):
print(word)
# But it's best to use the built in print function: print_stream()
vault.print_stream(vault.get_chat_stream(text, history))
# With print_stream() final answer is returned after streaming completes, so you can make it a variable
answer = vault.print_stream(vault.get_chat_stream(text, history))
The get_chat_stream() function:
get_chat_stream(self, text: str, history: str = None, summary: bool = False, get_context = False, n_context = 4, return_context = False, history_search = False, model='gpt-3.5-turbo', include_context_meta=False, metatag=False, metatag_prefixes=False, metatag_suffixes=False, custom_prompt=False)
Always use get_chat_stream() wrapped by either print_stream() or cloud_stream(). cloud_stream() is for cloud functions, like a Flask app serving a front end elsewhere; print_stream() is for local console printing.
-
Example Single Usage:
response = vault.print_stream(vault.get_chat_stream(text))
-
Example Chat:
response = vault.print_stream(vault.get_chat_stream(text, chat_history))
-
Example Summary:
summary = vault.print_stream(vault.get_chat_stream(text, summary=True))
-
Example Context-Based Response:
response = vault.print_stream(vault.get_chat_stream(text, get_context=True))
-
Example Context-Based Response w/ Chat History:
response = vault.print_stream(vault.get_chat_stream(text, chat_history, get_context=True))
-
Example Context-Response with Context Samples Returned:
vault_response = vault.print_stream(vault.get_chat_stream(text, get_context=True, return_context=True))
-
Example Custom Prompt:
response = vault.get_chat(text, chat_history, get_context=True, custom_prompt=my_prompt)
custom_prompt overrides the stock prompt we provide. Check ai.py to see the originals. The llm and llm_stream models manage history internally, so content is the only variable to be included and formattable in the prompt. Visit the get_chat_stream() function in vault.py for more information on metatags, or check out the streaming tutorial in our examples folder.
Example with GPT4:
response = vault.print_stream(vault.get_chat_stream(text, chat_history, get_context=True, model='gpt-4'))
Getting context from the Vault is usually the goal when customizing text generation, and doing that requires additional prompt variables. The llm_w_context and llm_w_context_stream models inject the history, context, and user input all in one prompt. In this case, your custom prompt needs {history}, {context}, and {question} formattable in the prompt, like so:
Example with Custom Prompt:
my_prompt = """
Use the following Context to answer the Question at the end.
Answer as if you were the modern voice of the context, without referencing the context or mentioning that fact any context has been given. Make sure to not just repeat what is referenced. Don't preface or give any warnings at the end.
Chat History (if any): {history}
Additional Context: {context}
Question: {question}
(Respond to the Question directly. Be the voice of the context, and most importantly: be interesting, engaging, and helpful)
Answer:
"""
response = vault.print_stream(vault.get_chat_stream(text, chat_history, get_context=True, custom_prompt=my_prompt))
Streaming is key for front-end applications, so we also built a cloud_stream function to make cloud streaming to your front-end app easy. In a Flask app, all you need to do is receive the customer text, then call the vault in the return, like this:
# Stream from a flask app in one line
return Response(vault.cloud_stream(vault.get_chat_stream(text, history, get_context=True)), mimetype='text/event-stream')
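Put together, a complete route might look like this minimal sketch - the route path, JSON field names, and app setup are illustrative assumptions, not part of the vectorvault API:
# minimal Flask app sketch - route name and request fields are hypothetical
from flask import Flask, request, Response
from vectorvault import Vault

app = Flask(__name__)
vault = Vault(user='YOUR_EMAIL', api_key='YOUR_API_KEY',
              openai_key='YOUR_OPENAI_KEY', vault='Customer Service')

@app.route('/chat', methods=['POST'])
def chat():
    data = request.get_json()
    text = data['text']            # the customer's message
    history = data.get('history')  # prior conversation, if any
    return Response(vault.cloud_stream(vault.get_chat_stream(text, history, get_context=True)),
                    mimetype='text/event-stream')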
This makes going live with a high-level app extremely fast and easy, plus your infrastructure will be scalable and robust. Now you can build impressive applications in record time! If you have any questions, message us in Discord, and check out the Colab notebooks in the examples folder that you can run in the browser right now.
Build an AI Customer Service Chat Bot
In the following code, we add all of a company's past support conversations to a cloud Vault. (We load the company support texts from a .txt file, vectorize them, then add them to the Vault.) As new customers message in, we vector-search the Vault for similar questions and answers, then instruct ChatGPT to use those previous answers to answer the new question. (NOTE: this also works with a customer FAQ or customer support response templates.)
Create the Customer Service Vault
from vectorvault import Vault
vault = Vault(user='YOUR_EMAIL',
api_key='YOUR_API_KEY',
openai_key='YOUR_OPENAI_KEY',
vault='Customer Service')
with open('customer_service.txt', 'r') as f:
vault.add(f.read())
vault.get_vectors()
vault.save()
And just like that, in only a few lines of code, we created a customer service vault. Now whenever you want to use it in production, just call get_chat() with get_context=True, which will take the customer's question, search the Vault for the most similar questions and answers, then have ChatGPT reply to the customer using that information.
customer_question = "I just bought your XD2000 remote and I'm having trouble syncing it to my tv"
chatbot_answer = vault.get_chat(customer_question, get_context=True)
That's all it takes to create an AI customer service chatbot that responds as well as any human support rep!
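To make the bot conversational, you can carry a running history between turns and stream the replies - a minimal sketch using only the calls shown above (history is just a str, so the exact formatting below is an assumption):
# simple interactive support loop - the history string format is assumed
history = ''
while True:
    question = input('Customer: ')
    answer = vault.print_stream(vault.get_chat_stream(question, history, get_context=True))
    history += f"\nCustomer: {question}\nSupport: {answer}"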
Getting Started:
Open the examples folder and try out the Google Colab tutorials! They will show you a lot, and since they run in Google Colab, there's no local setup required - just open them up and press play.
Contact:
If you have any questions, drop a message in the Vector Vault Discord channel - happy to help.
Happy coding!