Assistant plugin for Pinecone SDK


Assistant

Interact with Pinecone's Assistant APIs to create, manage, and chat with assistants (currently in beta). Pinecone Assistant is also available in the console.

⚠️ Note

Pinecone Assistant is currently in beta and access is limited by a waitlist. If you're interested in trying out Pinecone Assistant, please submit a request via the console.

Quickstart

The following example shows how to use an assistant to store and understand documents on a particular topic, and how to chat with the assistant about those documents in order to query your data semantically.

from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

# Create an assistant (in this case we'll store documents about planets)
space_assistant = pc.assistant.create_assistant(assistant_name="space")

# Upload information to your assistant
space_assistant.upload_file("./space-fun-facts.pdf")

# Once the upload succeeds, ask the assistant a question
msg = Message(content="How old is the earth?")
resp = space_assistant.chat_completions(messages=[msg])
print(resp)

# {'choices': [{'finish_reason': 'stop',
# 'index': 0,
# 'message': {'content': 'The age of the Earth is estimated to be '
#                         'about 4.54 billion years, based on '
#                         'evidence from radiometric age dating of '
#                         'meteorite material and Earth rocks, as '
#                         'well as lunar samples. This estimate has '
#                         'a margin of error of about 1%.',
#             'role': 'assistant'}}],
# 'id': '00000000000000001a377ceeaabf3c18',
# ... }

Assistants API

Create Assistant

To create an assistant, see the example below. This call creates an assistant with the specified name, metadata, and optional timeout settings.

from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')
metadata = {"author": "Jane Doe", "version": "1.0"}

assistant = pc.assistant.create_assistant(
    assistant_name="example_assistant", 
    metadata=metadata, 
    timeout=30
)

Arguments:

  • assistant_name: The name to assign to the assistant.
    • type: str
  • metadata: A dictionary containing metadata for the assistant.
    • type: Optional[dict[str, Any]] = None
  • timeout: Specify the number of seconds to wait until the assistant operation completes (see the sketch after the Returns list below).
    • If None, wait indefinitely until operation completes
    • If >=0, time out after this many seconds
    • If -1, return immediately and do not wait.
    • type: Optional[int] = None

Returns:

  • AssistantModel object with the following properties:
    • name: Contains the name of the assistant.
    • metadata: Contains the provided metadata.
    • created_at: Contains the timestamp of when the assistant was created.
    • updated_at: Contains the timestamp of when the assistant was last updated.
    • status: Contains the status of the assistant. This is one of:
      • 'Initializing'
      • 'Ready'
      • 'Terminating'
      • 'Failed'
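
Because timeout=-1 returns before the assistant is ready, a caller can poll its status until it leaves 'Initializing'. A minimal sketch, assuming only the create_assistant/describe_assistant calls and status values documented above (the polling interval is an arbitrary choice):

import time
from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

# Return immediately instead of blocking on creation
assistant = pc.assistant.create_assistant(
    assistant_name="example_assistant",
    timeout=-1,
)

# Poll until the assistant reaches a usable or terminal state
while assistant.status not in ("Ready", "Failed"):
    time.sleep(2)  # arbitrary polling interval
    assistant = pc.assistant.describe_assistant(assistant_name="example_assistant")

print(assistant.status)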

Describe Assistant

The example below describes/fetches an assistant with the specified name. A 404 is raised if no assistant exists with that name. There are two methods for this:

from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

assistant = pc.assistant.describe_assistant(
    assistant_name="example_assistant", 
)

# we can also do this
assistant = pc.assistant.Assistant(
    assistant_name="example_assistant", 
)

Arguments:

  • assistant_name: The name of the assistant to fetch.
    • type: str, required

Returns:

  • AssistantModel object with the following properties:
    • name: Contains the name of the assistant.
    • metadata: Contains the provided metadata.
    • created_at: Contains the timestamp of when the assistant was created.
    • updated_at: Contains the timestamp of when the assistant was last updated.
    • status: Contains the status of the assistant. This is one of:
      • 'Initializing'
      • 'Ready'
      • 'Terminating'
      • 'Failed'

List Assistants

Lists all assistants created in the current project.

from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

assistants = pc.assistant.list_assistants()

Returns:

  • List[AssistantModel] objects
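
A short usage sketch, assuming each returned AssistantModel exposes the name and status properties described above:

from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

# Print the name and status of every assistant in the project
for assistant in pc.assistant.list_assistants():
    print(assistant.name, assistant.status)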

Delete Assistant

Deletes an assistant with the specified name. A 404 is raised if no assistant exists with that name.

from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

pc.assistant.delete_assistant(
    assistant_name="example_assistant", 
)

Arguments:

  • assistant_name: The name of the assistant to delete.
    • type: str, required

Returns:

  • NoneType

Assistants Model API

Upload File to Assistant

Uploads a file from the specified path to this assistant for internal processing.

from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

assistant = pc.assistant.Assistant(
    assistant_name="example_assistant", 
)

# upload file
resp = assistant.upload_file(
    file_path="/path/to/file.txt",
    timeout=None
)

Arguments:

  • file_path: The path to the file that needs to be uploaded.

    • type: str, required
  • timeout: Specify the number of seconds to wait until file processing is done (see the sketch after the Returns list below).

    • If None, wait indefinitely.
    • If >= 0, time out after this many seconds.
    • If -1, return immediately and do not wait.
    • type: Optional[int] = None

Returns:

  • FileModel object with the following properties:
    • id: The file id of the uploaded file.
    • name: The name of the uploaded file.
    • created_on: The timestamp of when the file was created.
    • updated_on: The timestamp of the last update to the file.
    • metadata: Metadata associated with the file.
    • status: The status of the file.

Describe File in Assistant

Describes a file with the specified file ID in this assistant, including information on its status and metadata.

from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

assistant = pc.assistant.Assistant(
    assistant_name="example_assistant", 
)

# describe file
file = assistant.describe_file(file_id="070513b3-022f-4966-b583-a9b12e0290ff")

Arguments:

  • file_id: The file ID of the file to be described.
    • type: str, required

Returns:

  • FileModel object with the following properties:
    • id: The UUID of the requested file.
    • name: The name of the requested file.
    • created_on: The timestamp of when the file was created.
    • updated_on: The timestamp of the last update to the file.
    • metadata: Metadata associated with the file.
    • status: The status of the file.

List Files

Lists all uploaded files in this assistant.

from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

assistant = pc.assistant.Assistant(
    assistant_name="example_assistant", 
)

files = assistant.list_files()

Arguments: None

Returns:

  • List[FileModel], the list of files in the assistant
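
A short sketch, assuming each FileModel exposes the properties listed in the upload section:

from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

assistant = pc.assistant.Assistant(assistant_name="example_assistant")

# Print the ID, name, and processing status of every uploaded file
for f in assistant.list_files():
    print(f.id, f.name, f.status)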

Delete File from Assistant

Deletes a file with the specified file_id from this assistant.

from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

assistant = pc.assistant.Assistant(
    assistant_name="example_assistant", 
)

# delete file
assistant.delete_file(file_id="070513b3-022f-4966-b583-a9b12e0290ff")

Arguments:

  • file_id: The file ID of the file to be deleted.
    • type: str, required

Returns:

  • NoneType

Chat Completions

Performs a chat completion request against this assistant. If stream is set to True, this function streams the response in chunks by returning a generator.

from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

space_assistant = pc.assistant.Assistant(assistant_name="space")

msg = Message(content="How old is the earth?")
resp = space_assistant.chat_completions(messages=[msg])

# The stream version
chunks = space_assistant.chat_completions(messages=[msg], stream=True)

for chunk in chunks:
    if chunk:
        print(chunk)

Arguments:

  • messages: The current context for the chat request. The final element in the list represents the user query to be made from this context (see the multi-turn sketch after this list).

    • type: List[Message] where Message requires the following:
      • role: str, the role of the context ('user' or 'agent')
      • content: str, the content of the context
  • stream: If True, the return type is an Iterable[StreamingChatResultModel] and the data is returned incrementally as a generator/stream.

    • type: bool, default False
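
For multi-turn chat, earlier exchanges are passed as the leading elements of messages and the new user query comes last. A minimal sketch, assuming Message accepts role and content as keyword arguments per the list above (the sample answer text is illustrative only):

from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

space_assistant = pc.assistant.Assistant(assistant_name="space")

# Earlier turns provide context; the final message is the new user query
history = [
    Message(role="user", content="How old is the earth?"),
    Message(role="agent", content="The Earth is about 4.54 billion years old."),
    Message(role="user", content="How was that age determined?"),
]
resp = space_assistant.chat_completions(messages=history)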

Returns:

  • The default result is a ChatResultModel with the following format:
    • choices: A list with the following structure:
      • finish_reason: The reason the response finished, e.g., "stop".
      • index: The index of the choice in the list.
      • message: An object with the following properties:
        • content: The content of the message.
        • role: The role of the message sender, e.g., "assistant".
      • logprobs: The log probabilities (if applicable), otherwise null.
    • id: The unique identifier of the chat completion.
    • model: The model used for the chat completion, e.g., "gpt-3.5-turbo-0613".

See the example below

{
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "content": "The 2020 World Series was played in Texas at Globe Life Field in Arlington.",
                "role": "assistant"
            },
            "logprobs": null
        }
    ],
    "id": "00000000000000005c12d4d71263b642",
    "model": "space"
}
  • When stream is set to True, the response is an iterable of StreamingChatResultModel objects with the following properties:
    • choices: A list with the following structure:
      • finish_reason: The reason the response finished, which can be null while streaming.
      • index: The index of the choice in the list.
      • delta: An object with the following properties:
        • content: The incremental content of the message.
        • role: The role of the message sender, which can be empty while streaming.
      • logprobs: The log probabilities (if applicable), otherwise null.
    • id: The unique identifier of the chat completion.
    • model: The model used for the chat completion, e.g., "gpt-3.5-turbo-0613".

See the example below

    {
        "choices": [
            {
                "finish_reason": null,
                "index": 0,
                "delta": {
                    "content": "The",
                    "role": ""
                },
                "logprobs": null
            }
        ],
        "id": "00000000000000005d487d0ba0cde006",
        "model": "space"
    }
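
A common pattern is to stitch the streamed deltas back into one reply. A sketch, assuming each chunk exposes the choices/delta structure shown above as object attributes:

from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

space_assistant = pc.assistant.Assistant(assistant_name="space")

msg = Message(content="How old is the earth?")
chunks = space_assistant.chat_completions(messages=[msg], stream=True)

# Collect incremental delta.content pieces into the full message text
parts = []
for chunk in chunks:
    if chunk and chunk.choices:
        content = chunk.choices[0].delta.content
        if content:
            parts.append(content)

print("".join(parts))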
