# Skyvern integration for LlamaIndex
## Table of Contents
- Skyvern LlamaIndex
  - Installation
  - Usage
    - Run a task (sync) with the Skyvern agent (calling the Skyvern agent function directly in the tool)
    - Dispatch a task (async) with the Skyvern agent (calling the Skyvern agent function directly in the tool)
    - Run a task (sync) with the Skyvern client (calling the Skyvern OpenAPI in the tool)
    - Dispatch a task (async) with the Skyvern client (calling the Skyvern OpenAPI in the tool)
# Skyvern LlamaIndex
This is a LlamaIndex integration for Skyvern.
## Installation
```bash
pip install skyvern-llamaindex
```
## Usage
### Run a task (sync) with the Skyvern agent (calling the Skyvern agent function directly in the tool)
> A sync task does not return until the task is finished.
> :warning: :warning: To run this code block, you first need to run `skyvern init --openai-api-key <your_openai_api_key>` in your terminal to set up Skyvern.
```python
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.agent import SkyvernTaskToolSpec

# load the OpenAI API key from .env
load_dotenv()

skyvern_tool = SkyvernTaskToolSpec()

tools = skyvern_tool.to_tool_list(["run"])

agent = OpenAIAgent.from_tools(
    tools=tools,
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
    max_function_calls=10,
)

# to run the Skyvern agent locally, you must run `skyvern init` first
response = agent.chat("Run the task with skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.'")
print(response)
```
### Dispatch a task (async) with the Skyvern agent (calling the Skyvern agent function directly in the tool)
> A dispatched task returns immediately, and the task keeps running in the background. You can use the `get` tool to poll the task information until the task is finished.
> :warning: :warning: To run this code block, you first need to run `skyvern init --openai-api-key <your_openai_api_key>` in your terminal to set up Skyvern.
```python
import asyncio

from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from llama_index.core.tools import FunctionTool
from skyvern_llamaindex.agent import SkyvernTaskToolSpec


async def sleep(seconds: int) -> str:
    await asyncio.sleep(seconds)
    return f"Slept for {seconds} seconds"


# load the OpenAI API key from .env
load_dotenv()

skyvern_tool = SkyvernTaskToolSpec()

sleep_tool = FunctionTool.from_defaults(
    async_fn=sleep,
    description="Sleep for a given number of seconds",
    name="sleep",
)

tools = skyvern_tool.to_tool_list(["dispatch", "get"])
tools.append(sleep_tool)

agent = OpenAIAgent.from_tools(
    tools=tools,
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
    max_function_calls=10,
)

response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.' Then, get this task information until it's completed. The task information re-get interval should be 60s.")
print(response)
```
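The dispatch-then-poll loop the agent is asked to perform above can be sketched in plain Python. This is a minimal sketch of the pattern only: `poll_until_complete` and `fake_get_task` are hypothetical stand-ins for the Skyvern `get` tool, not part of the library.

```python
import time


def poll_until_complete(get_task, task_id, interval_s=60, max_polls=10):
    """Poll a task-status callable until it reports a terminal state."""
    for _ in range(max_polls):
        task = get_task(task_id)
        if task["status"] in ("completed", "failed", "terminated"):
            return task
        time.sleep(interval_s)
    raise TimeoutError(f"task {task_id} did not finish after {max_polls} polls")


# Stand-in for the Skyvern `get` tool: reports "completed" on the third poll.
_calls = {"n": 0}

def fake_get_task(task_id):
    _calls["n"] += 1
    status = "completed" if _calls["n"] >= 3 else "running"
    return {"task_id": task_id, "status": status}


result = poll_until_complete(fake_get_task, "tsk_123", interval_s=0)
print(result["status"])  # completed
```

In the agent examples, the LLM drives this loop itself by alternating the `get` and `sleep` tools, which is why both are registered.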
### Run a task (sync) with the Skyvern client (calling the Skyvern OpenAPI in the tool)
> A sync task does not return until the task is finished.
> There is no need to run the `skyvern init` command in your terminal to set up Skyvern before using this integration.
```python
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.client import SkyvernTaskToolSpec

# load the OpenAI API key from .env
load_dotenv()

skyvern_client_tool = SkyvernTaskToolSpec(
    credential="<your_organization_api_key>",
)

tools = skyvern_client_tool.to_tool_list(["run"])

agent = OpenAIAgent.from_tools(
    tools=tools,
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
    max_function_calls=10,
)

response = agent.chat("Run the task with skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.'")
print(response)
```
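Rather than hardcoding the organization API key, you may prefer to read it from the environment (the examples above already call `load_dotenv()`, so a `.env` entry works too). This is a sketch under an assumption: `SKYVERN_API_KEY` and `load_skyvern_credential` are illustrative names, not part of the library.

```python
import os


def load_skyvern_credential(env_var: str = "SKYVERN_API_KEY") -> str:
    """Read the Skyvern organization API key from the environment."""
    credential = os.environ.get(env_var)
    if not credential:
        raise RuntimeError(f"Set {env_var} before constructing SkyvernTaskToolSpec")
    return credential


# Demo only: in practice the key comes from your shell or your .env file.
os.environ["SKYVERN_API_KEY"] = "<your_organization_api_key>"
print(load_skyvern_credential())
```

The returned value can then be passed as `credential=` to `SkyvernTaskToolSpec`.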
### Dispatch a task (async) with the Skyvern client (calling the Skyvern OpenAPI in the tool)
> A dispatched task returns immediately, and the task keeps running in the background. You can use the `get` tool to poll the task information until the task is finished.
> There is no need to run the `skyvern init` command in your terminal to set up Skyvern before using this integration.
```python
import asyncio

from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from llama_index.core.tools import FunctionTool
from skyvern_llamaindex.client import SkyvernTaskToolSpec


async def sleep(seconds: int) -> str:
    await asyncio.sleep(seconds)
    return f"Slept for {seconds} seconds"


# load the OpenAI API key from .env
load_dotenv()

skyvern_client_tool = SkyvernTaskToolSpec(
    credential="<your_organization_api_key>",
)

sleep_tool = FunctionTool.from_defaults(
    async_fn=sleep,
    description="Sleep for a given number of seconds",
    name="sleep",
)

tools = skyvern_client_tool.to_tool_list(["dispatch", "get"])
tools.append(sleep_tool)

agent = OpenAIAgent.from_tools(
    tools=tools,
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
    max_function_calls=10,
)

response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.' Then, get this task information until it's completed. The task information re-get interval should be 60s.")
print(response)
```
## Download files
### File details: skyvern_llamaindex-0.0.3.tar.gz (Source Distribution)

File metadata:

- Download URL: skyvern_llamaindex-0.0.3.tar.gz
- Size: 3.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.11

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `10881f44fde9b0a2768d5b3128ec863d4c050d272409ee50b1e57b6fcf434b61` |
| MD5 | `c3c1caff7185a8e85b43e4e660ef49dc` |
| BLAKE2b-256 | `e2ed75da83ce13aa911591af2038abca96541f61ee20a25a22902e5a834c15e6` |
### File details: skyvern_llamaindex-0.0.3-py3-none-any.whl (Built Distribution)

File metadata:

- Download URL: skyvern_llamaindex-0.0.3-py3-none-any.whl
- Size: 5.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.11

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `0b2b960db30da4559843a6e0a4beaeb580e5615eb26766ab43f5323e0f9bbb9c` |
| MD5 | `327588c26c8a08f0245c78e848b689e0` |
| BLAKE2b-256 | `67be23f6c5d45254d434776b2beeed26df891e792c896d8cb14c9ea9a29a31d4` |