Skyvern LlamaIndex

This is a LlamaIndex integration for Skyvern.

Installation

pip install skyvern-llamaindex

Usage

Run a task (sync) with the Skyvern agent (the tool calls the Skyvern agent function directly)

A sync task call does not return until the task is finished.

:warning: If you want to run this code block, you must first run skyvern init --openai-api-key <your_openai_api_key> in your terminal to set up Skyvern.

from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.agent import SkyvernToolSpec

# load OpenAI API key from .env
load_dotenv()

skyvern_tool = SkyvernToolSpec()

tools = skyvern_tool.to_tool_list(["run_task"])

agent = OpenAIAgent.from_tools(
    tools=tools,
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
    max_function_calls=10,
)

# to run skyvern agent locally, must run `skyvern init` first
response = agent.chat("Run the task with skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.'")
print(response)

Dispatch a task (async) with the Skyvern agent (the tool calls the Skyvern agent function directly)

A dispatched task returns immediately and runs in the background. Use the get_task tool to poll the task information until the task is finished.

:warning: If you want to run this code block, you must first run skyvern init --openai-api-key <your_openai_api_key> in your terminal to set up Skyvern.

import asyncio
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from llama_index.core.tools import FunctionTool
from skyvern_llamaindex.agent import SkyvernToolSpec

async def sleep(seconds: int) -> str:
    await asyncio.sleep(seconds)
    return f"Slept for {seconds} seconds"

# load OpenAI API key from .env
load_dotenv()

skyvern_tool = SkyvernToolSpec()

sleep_tool = FunctionTool.from_defaults(
    async_fn=sleep,
    description="Sleep for a given number of seconds",
    name="sleep",
)

tools = skyvern_tool.to_tool_list(["dispatch_task", "get_task"])
tools.append(sleep_tool)

agent = OpenAIAgent.from_tools(
    tools=tools,
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
    max_function_calls=10,
)

response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.' Then poll the task information every 60 seconds until the task is completed.")
print(response)

Run a task (sync) with the Skyvern client (the tool calls the Skyvern API)

A sync task call does not return until the task is finished.

There is no need to run the skyvern init command to set up Skyvern before using this integration.

from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.client import SkyvernToolSpec

# load OpenAI API key from .env
load_dotenv()

skyvern_client_tool = SkyvernToolSpec(
    credential="<your_organization_api_key>",
)

tools = skyvern_client_tool.to_tool_list(["run_task"])

agent = OpenAIAgent.from_tools(
    tools=tools,
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
    max_function_calls=10,
)

response = agent.chat("Run the task with skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.'")
print(response)

Dispatch a task (async) with the Skyvern client (the tool calls the Skyvern API)

A dispatched task returns immediately and runs in the background. Use the get_task tool to poll the task information until the task is finished.

There is no need to run the skyvern init command to set up Skyvern before using this integration.

import asyncio
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from llama_index.core.tools import FunctionTool
from skyvern_llamaindex.client import SkyvernToolSpec


async def sleep(seconds: int) -> str:
    await asyncio.sleep(seconds)
    return f"Slept for {seconds} seconds"

# load OpenAI API key from .env
load_dotenv()

skyvern_client_tool = SkyvernToolSpec(
    credential="<your_organization_api_key>",
)

sleep_tool = FunctionTool.from_defaults(
    async_fn=sleep,
    description="Sleep for a given number of seconds",
    name="sleep",
)

tools = skyvern_client_tool.to_tool_list(["dispatch_task", "get_task"])
tools.append(sleep_tool)

agent = OpenAIAgent.from_tools(
    tools=tools,
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
    max_function_calls=10,
)

response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.' Then poll the task information every 60 seconds until the task is completed.")
print(response)
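The dispatch-then-poll pattern that the agent is prompted to perform (dispatch_task, then alternate sleep and get_task) can also be sketched by hand. The helper below is a generic asyncio polling loop; the get_status coroutine and the status values are illustrative stand-ins for the Skyvern get_task call and its response, not part of the Skyvern API:

```python
import asyncio

async def poll_until_complete(get_status, interval_s: float = 60.0, timeout_s: float = 3600.0) -> dict:
    """Call get_status() every interval_s seconds until the task reaches a terminal status."""
    deadline = asyncio.get_running_loop().time() + timeout_s
    while True:
        task_info = await get_status()
        # Terminal statuses here are assumptions; check the actual task response schema.
        if task_info.get("status") in ("completed", "failed", "terminated"):
            return task_info
        if asyncio.get_running_loop().time() >= deadline:
            raise TimeoutError("task did not finish within the timeout")
        await asyncio.sleep(interval_s)

# Illustrative stand-in for the Skyvern get_task call: reports "running"
# twice, then "completed" on the third poll.
async def fake_get_status(state={"calls": 0}):
    state["calls"] += 1
    return {"status": "completed" if state["calls"] >= 3 else "running"}

result = asyncio.run(poll_until_complete(fake_get_status, interval_s=0.01))
print(result["status"])  # completed
```

In a real integration you would replace fake_get_status with a closure over the task id returned by dispatch_task.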
