llama-index readers airbyte_hubspot integration
Airbyte Hubspot Loader
The Airbyte Hubspot Loader allows you to access different Hubspot objects.
Installation
- Install llama_hub:
pip install llama_hub
- Install the hubspot source:
pip install airbyte-source-hubspot
Usage
Here's an example usage of the AirbyteHubspotReader.
from llama_hub.airbyte_hubspot import AirbyteHubspotReader
hubspot_config = {
# ...
}
reader = AirbyteHubspotReader(config=hubspot_config)
documents = reader.load_data(stream_name="products")
Configuration
Check out the Airbyte documentation page for details on how to configure the reader. The JSON schema the config object should adhere to can be found on GitHub: https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-hubspot/source_hubspot/spec.yaml.
The general shape looks like this:
{
  "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>",
  "credentials": {
    "credentials_title": "Private App Credentials",
    "access_token": "<access token of your private app>"
  }
}
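In practice the config is passed as a plain Python dict. A minimal sketch of assembling it, reading the token from an environment variable (HUBSPOT_ACCESS_TOKEN is an assumed variable name for illustration, not something the loader requires):

```python
import os

# Sketch: build the config dict the reader expects. The access token is
# read from an assumed environment variable so it is kept out of the code.
hubspot_config = {
    "start_date": "2020-10-20T00:00:00Z",
    "credentials": {
        "credentials_title": "Private App Credentials",
        "access_token": os.environ.get("HUBSPOT_ACCESS_TOKEN", "<access token>"),
    },
}
```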
By default, all fields are stored as metadata in the documents and the text is set to the JSON representation of all the fields. To construct the document text yourself, pass a record_handler to the reader:
from llama_index import Document  # on newer llama-index versions: from llama_index.core import Document

def handle_record(record, id):
    # Use the record's "title" field as the document text and keep all
    # fields as metadata.
    return Document(
        doc_id=id, text=record.data["title"], extra_info=record.data
    )

reader = AirbyteHubspotReader(
    config=hubspot_config, record_handler=handle_record
)
Lazy loads
The reader.load_data method collects all documents and returns them as a list. With a large number of documents this can exhaust memory. Use reader.lazy_load_data instead to get an iterator that can be consumed document by document, without keeping all documents in memory at once.
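To illustrate the difference, the stand-in generator below simulates a lazy iterator (the real reader.lazy_load_data yields Document objects); itertools.islice consumes only the first few items without materializing the rest:

```python
from itertools import islice

# Stand-in for reader.lazy_load_data(stream_name="products"): any iterator
# of documents behaves the same way. Only one item exists in memory at a
# time, even though a million records are available.
def lazy_documents():
    for i in range(1_000_000):
        yield {"doc_id": str(i), "text": f"record {i}"}

# Take only the first 10 documents; the remaining records are never produced.
first_ten = list(islice(lazy_documents(), 10))
```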
Incremental loads
This loader supports loading data incrementally (returning only documents that were not loaded in a previous run or have been updated since):
reader = AirbyteHubspotReader(config={...})
documents = reader.load_data(stream_name="products")
current_state = reader.last_state # can be pickled away or stored otherwise
updated_documents = reader.load_data(
stream_name="products", state=current_state
) # only loads documents that were updated since last time
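Since the state object can be pickled, a run's cursor can be persisted to disk and restored before the next run. A minimal sketch using a hypothetical state value (the real one comes from reader.last_state, and its exact shape is connector-specific):

```python
import pickle

# Hypothetical stand-in for reader.last_state; the real object is returned
# by the reader after a load_data call.
current_state = {"products": {"updatedAt": "2023-01-01T00:00:00Z"}}

# Persist the state between runs...
blob = pickle.dumps(current_state)

# ...and restore it later to pass as load_data(..., state=restored_state).
restored_state = pickle.loads(blob)
```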
This loader is designed to be used as a way to load data into LlamaIndex and/or subsequently used as a Tool in a LangChain Agent. See here for examples.
Hashes for llama_index_readers_airbyte_hubspot-0.1.3.tar.gz
Algorithm | Hash digest
---|---
SHA256 | 35f7330735fbaefdf2e7760b0520e6f42e94db75e1330851292f745e23b69b2d
MD5 | 53c697d7cc69460bd20b62b271609006
BLAKE2b-256 | 598d5bdb10395b4a69e41dfc37bf071b595d053273877555b0a9c0b36246b678
Hashes for llama_index_readers_airbyte_hubspot-0.1.3-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | e9ca5061b3d74e95c6273dbe1e76708a7690bb7ce48bf5601d1acca82cd060fa
MD5 | 9b7ee2f6d937a58f152d4cb14ea766a1
BLAKE2b-256 | 0d11e2d1659d2a5c5dd711c42153ce3fd143a22db5711a1a6d7d1ec92e491e95