
Skyvern LlamaIndex

This is a LlamaIndex integration for Skyvern.

Installation

```shell
pip install skyvern-llamaindex
```

Usage

Run a task (sync) with the Skyvern agent (the tool calls the Skyvern agent function directly)

A sync task call does not return until the task is finished.

:warning: Before running this code block, run the `skyvern init --openai-api-key <your_openai_api_key>` command in your terminal to set up Skyvern.

```python
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.agent import SkyvernAgentToolSpec

# load the OpenAI API key from .env
load_dotenv()

skyvern_tool = SkyvernAgentToolSpec()

tools = skyvern_tool.to_tool_list(["run_task_v2"])

agent = OpenAIAgent.from_tools(
    tools=tools,
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
    max_function_calls=10,
)

# to run the Skyvern agent locally, you must run `skyvern init` first
response = agent.chat("Run the task with skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.'")
print(response)
```

Run a task (async) with the Skyvern agent (the tool calls the Skyvern agent function directly)

An async task call returns immediately while the task keeps running in the background. You can use the get_task_v2 tool to poll the task information until the task is finished.

:warning: Before running this code block, run the `skyvern init --openai-api-key <your_openai_api_key>` command in your terminal to set up Skyvern.

```python
import asyncio

from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from llama_index.core.tools import FunctionTool
from skyvern_llamaindex.agent import SkyvernAgentToolSpec

async def sleep(seconds: int) -> str:
    await asyncio.sleep(seconds)
    return f"Slept for {seconds} seconds"

# load the OpenAI API key from .env
load_dotenv()

skyvern_tool = SkyvernAgentToolSpec()

# a sleep tool lets the agent wait between polls
sleep_tool = FunctionTool.from_defaults(
    async_fn=sleep,
    description="Sleep for a given number of seconds",
    name="sleep",
)

tools = skyvern_tool.to_tool_list(["queue_task_v2", "get_task_v2"])
tools.append(sleep_tool)

agent = OpenAIAgent.from_tools(
    tools=tools,
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
    max_function_calls=10,
)

response = agent.chat("Queue a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.' Then, get this task information until it's completed. The task information re-get interval should be 60s.")
print(response)
```
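The prompt above asks the agent to alternate get_task_v2 and sleep calls until the task completes. The same queue-then-poll loop can be sketched in plain Python; the `queue_task` and `get_task` functions and the status values below are hypothetical stand-ins for the Skyvern tools, not the real client API:

```python
import asyncio

# Hypothetical stand-ins for the queue_task_v2 / get_task_v2 tools;
# the real tools call Skyvern and return richer task objects.
async def queue_task(prompt: str) -> str:
    return "task_123"  # pretend task id

async def get_task(task_id: str, _state={"polls": 0}) -> dict:
    _state["polls"] += 1
    # pretend the task finishes on the third poll
    status = "completed" if _state["polls"] >= 3 else "running"
    return {"task_id": task_id, "status": status}

async def run_until_done(prompt: str, interval_s: float = 0.01) -> dict:
    """Queue a task, then poll its status until it reaches a terminal state."""
    task_id = await queue_task(prompt)
    while True:
        task = await get_task(task_id)
        if task["status"] in ("completed", "failed", "terminated"):
            return task
        await asyncio.sleep(interval_s)

result = asyncio.run(run_until_done("Get the top 3 Hacker News posts"))
print(result["status"])  # prints "completed" after the stub's third poll
```

With the agent-driven version, the LLM decides the polling interval from the prompt (60s above); a hand-written loop like this makes the interval and the terminal statuses explicit.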

Run a task (sync) with the Skyvern client (the tool calls the Skyvern API over HTTP)

A sync task call does not return until the task is finished.

There is no need to run the `skyvern init` command in your terminal before using this integration; the client tool authenticates with your organization API key.

```python
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.client import SkyvernClientToolSpec

# load the OpenAI API key from .env
load_dotenv()

skyvern_client_tool = SkyvernClientToolSpec(
    credential="<your_organization_api_key>",
)

tools = skyvern_client_tool.to_tool_list(["run_task_v2"])

agent = OpenAIAgent.from_tools(
    tools=tools,
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
    max_function_calls=10,
)

response = agent.chat("Run the task with skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.'")
print(response)
```
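Rather than hardcoding the organization API key, you may prefer to read it from the environment before constructing `SkyvernClientToolSpec`. A minimal sketch; the `SKYVERN_API_KEY` variable name is an assumption here, not something the package mandates:

```python
import os

def skyvern_credential() -> str:
    """Read the Skyvern organization API key from the environment."""
    credential = os.environ.get("SKYVERN_API_KEY")
    if not credential:
        raise RuntimeError("Set SKYVERN_API_KEY to your organization API key")
    return credential

# usage (assumed): SkyvernClientToolSpec(credential=skyvern_credential())
```

This keeps the secret out of source control and lets the same script run against different organizations by changing the environment.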

Run a task (async) with the Skyvern client (the tool calls the Skyvern API over HTTP)

An async task call returns immediately while the task keeps running in the background. You can use the get_task_v2 tool to poll the task information until the task is finished.

There is no need to run the `skyvern init` command in your terminal before using this integration; the client tool authenticates with your organization API key.

```python
import asyncio

from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from llama_index.core.tools import FunctionTool
from skyvern_llamaindex.client import SkyvernClientToolSpec

async def sleep(seconds: int) -> str:
    await asyncio.sleep(seconds)
    return f"Slept for {seconds} seconds"

# load the OpenAI API key from .env
load_dotenv()

skyvern_client_tool = SkyvernClientToolSpec(
    credential="<your_organization_api_key>",
)

# a sleep tool lets the agent wait between polls
sleep_tool = FunctionTool.from_defaults(
    async_fn=sleep,
    description="Sleep for a given number of seconds",
    name="sleep",
)

tools = skyvern_client_tool.to_tool_list(["queue_task_v2", "get_task_v2"])
tools.append(sleep_tool)

agent = OpenAIAgent.from_tools(
    tools=tools,
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
    max_function_calls=10,
)

response = agent.chat("Queue a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.' Then, get this task information until it's completed. The task information re-get interval should be 60s.")
print(response)
```
