
Run gptscripts from Python apps

Project description

GPTScript Python Module

Introduction

The GPTScript Python module is a library that provides a simple interface to create and run gptscripts within Python applications, and Jupyter notebooks. It allows you to define tools, execute them, and process the responses.

Installation

You can install the GPTScript Python module using pip.

pip install gptscript

On macOS, Windows x86-64, and Linux (x86-64 and aarch64), the platform-specific wheels bundle the gptscript binary, so no additional installation step is required.

SDIST and none-any wheel installations

When installing from the sdist or the none-any wheel, the binary is not packaged by default. You must run the install_gptscript command to install the binary.

install_gptscript

The script is added to the same bin directory as the python executable, so it should be in your path.

Or you can install the gptscript cli from your code by running:

from gptscript.install import install

install()

Using an existing gptscript cli

If you already have the gptscript CLI installed, you can use it by setting the GPTSCRIPT_BIN environment variable:

export GPTSCRIPT_BIN="/path/to/gptscript"
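
Equivalently, you can set the variable from Python before constructing the GPTScript instance. A minimal sketch; the binary path is a placeholder:

import os

# Point the module at an existing gptscript binary (placeholder path).
os.environ["GPTSCRIPT_BIN"] = "/path/to/gptscript"

from gptscript.gptscript import GPTScript

gptscript = GPTScript()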

GPTScript

The GPTScript instance allows the caller to run gptscript files, tools, and other operations (see below). The intention is that a single GPTScript instance is all you need for the life of your application; call close() on the instance when you are done.
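
A minimal lifecycle sketch along those lines, assuming one shared instance that is closed at application shutdown:

import asyncio

from gptscript.gptscript import GPTScript


async def main():
    # One GPTScript instance for the life of the application.
    gptscript = GPTScript()
    try:
        run = gptscript.run("/path/to/file")
        print(await run.text())
    finally:
        # Always close the instance to release the underlying gptscript process.
        gptscript.close()


asyncio.run(main())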

Global Options

When creating a GPTScript instance, you can pass the following global options. These options are also available as run options; anything specified as a run option takes precedence over the global option. A construction sketch follows the list.

  • APIKey: Specify an OpenAI API key for authenticating requests. Defaults to the OPENAI_API_KEY environment variable.
  • BaseURL: A base URL for an OpenAI-compatible API. Defaults to https://api.openai.com/v1.
  • DefaultModel: The default model to use for chat completion requests.
  • DefaultModelProvider: The default model provider to use for chat completion requests.
  • Env: Supply the environment variables. Supplying anything here means that nothing from the environment is used. The default is os.environ. Supplying Env at the run/evaluate level is treated as additional.
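
A construction sketch, assuming the GlobalOptions class in gptscript.opts exposes these options as apiKey, baseURL, and defaultModel fields (names may differ slightly in your installed version):

import os

from gptscript.gptscript import GPTScript
from gptscript.opts import GlobalOptions

# Assumed field names mirroring the list above; verify against gptscript.opts.
opts = GlobalOptions(
    apiKey=os.getenv("OPENAI_API_KEY"),
    baseURL="https://api.openai.com/v1",
    defaultModel="gpt-4o",  # illustrative model name
)

gptscript = GPTScript(opts)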

Run Options

These options can be passed to the run and evaluate functions. None of them is required, and the defaults will reduce the number of calls made to the Model API. As noted above, the Global Options can also be specified here, and anything set at the run level takes precedence. A sketch of passing run options follows the list.

  • disableCache: Disable caching. Defaults to False.
  • subTool: Use the tool with this name instead of the first tool in the file.
  • input: Input arguments for the tool run.
  • workspace: Directory to use as the workspace; if specified, it will not be deleted on exit.
  • chatState: The chat state to continue, or None to start a new chat and return the state.
  • confirm: Prompt before running potentially dangerous commands.
  • prompt: Allow the tool to prompt the user for input.
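
A sketch of passing run options, assuming the Options class in gptscript.opts uses the field names listed above:

from gptscript.gptscript import GPTScript
from gptscript.opts import Options


async def run_with_options():
    gptscript = GPTScript()

    # Assumed Options fields mirroring the list above; the path, input, and
    # workspace values are placeholders.
    run = gptscript.run(
        "/path/to/file",
        opts=Options(
            disableCache=True,
            input="--topic cats",
            workspace="/tmp/my-workspace",
        ),
    )
    print(await run.text())
    gptscript.close()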

Tools

The Tool class represents a gptscript tool. The fields align with what you can define in a normal gptscript .gpt file. A short ToolDef sketch follows the field list.

Fields

  • name: The name of the tool.
  • description: A description of the tool.
  • tools: Additional tools associated with the main tool.
  • maxTokens: The maximum number of tokens to generate.
  • model: The GPT model to use.
  • cache: Whether to use caching for responses.
  • temperature: The temperature parameter for response generation.
  • arguments: Additional arguments for the tool.
  • internalPrompt: Optional boolean; defaults to None.
  • instructions: Instructions or additional information about the tool.
  • jsonResponse: Whether the response should be in JSON format. (If you set this to True, you must also say 'json' in the instructions.)
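
A short ToolDef sketch exercising several of these fields; the values are illustrative only:

from gptscript.gptscript import GPTScript
from gptscript.tool import ToolDef


async def tool_fields_example():
    # Illustrative values for the fields described above.
    tool = ToolDef(
        name="poet",
        description="Writes a short poem about the given topic",
        maxTokens=200,
        temperature=0.7,
        cache=False,
        instructions="Write a four-line poem about the topic supplied as input.",
    )

    gptscript = GPTScript()
    run = gptscript.evaluate(tool)
    print(await run.text())
    gptscript.close()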

Primary Functions

Aside from the list methods, the evaluate and run methods allow you to execute a tool and get the responses. Both also provide a streaming form of execution if you want to process the output in your code as the tool is running.

list_tools()

This function lists the available tools.

from gptscript.gptscript import GPTScript


async def list_tools():
    gptscript = GPTScript()
    tools = await gptscript.list_tools()
    print(tools)
    gptscript.close()

list_models()

This function lists the available GPT models.

from gptscript.gptscript import GPTScript


async def list_models():
    gptscript = GPTScript()
    models = await gptscript.list_models()
    print(models)
    gptscript.close()

parse()

Parse a file into a Tool data structure.

from gptscript.gptscript import GPTScript


async def parse_example():
    gptscript = GPTScript()
    tools = await gptscript.parse("/path/to/file")
    print(tools)
    gptscript.close()

parse_tool()

Parse contents that represent a GPTScript file into a Tool data structure.

from gptscript.gptscript import GPTScript


async def parse_tool_example():
    gptscript = GPTScript()
    tools = await gptscript.parse_tool("Instructions: Say hello!")
    print(tools)
    gptscript.close()

fmt()

Convert a tool data structure back into GPTScript file contents.

from gptscript.gptscript import GPTScript


async def fmt_example():
    gptscript = GPTScript()
    tools = await gptscript.parse_tool("Instructions: Say hello!")
    print(tools)

    contents = gptscript.fmt(tools)
    print(contents)  # This would print "Instructions: Say hello!"
    gptscript.close()

evaluate()

Executes a tool with optional arguments.

from gptscript.gptscript import GPTScript
from gptscript.tool import ToolDef


async def evaluate_example():
    tool = ToolDef(instructions="Who was the president of the United States in 1928?")
    gptscript = GPTScript()

    run = gptscript.evaluate(tool)
    output = await run.text()

    print(output)

    gptscript.close()

run()

Executes a GPT script file with optional input and arguments. The script path is relative to the caller's source directory.

from gptscript.gptscript import GPTScript


async def run_example():
    gptscript = GPTScript()

    run = gptscript.run("/path/to/file")
    output = await run.text()

    print(output)

    gptscript.close()

Streaming events

GPTScript provides events for the various steps it takes. You can receive those events and process them with event_handlers. The run method is used here, but the same functionality exists for the evaluate method.

from gptscript.gptscript import GPTScript
from gptscript.frame import RunFrame, CallFrame, PromptFrame
from gptscript.run import Run


async def process_event(run: Run, event: RunFrame | CallFrame | PromptFrame):
    print(event.__dict__)


async def evaluate_example():
    gptscript = GPTScript()

    run = gptscript.run("/path/to/file", event_handlers=[process_event])
    output = await run.text()

    print(output)

    gptscript.close()

Confirm

Using the confirm=True option allows a user to inspect potentially dangerous commands before they are run. The caller can allow or deny their execution. To do this, a caller should look for the CallConfirm event.

from gptscript.gptscript import GPTScript
from gptscript.frame import RunFrame, CallFrame, PromptFrame
from gptscript.run import Run, RunEventType
from gptscript.confirm import AuthResponse
from gptscript.opts import Options

gptscript = GPTScript()


async def confirm(run: Run, event: RunFrame | CallFrame | PromptFrame):
    if event.type == RunEventType.callConfirm:
        # AuthResponse also has a "message" field to specify why the confirm was denied.
        await gptscript.confirm(AuthResponse(accept=True))


async def evaluate_example():
    run = gptscript.run("/path/to/file", event_handlers=[confirm])
    output = await run.text()

    print(output)

    gptscript.close()

Prompt

Using the prompt=True option allows a script to prompt a user for input. To do this, a caller should look for the Prompt event. Note that if a Prompt event occurs when it has not explicitly been allowed, the run will error.

from gptscript.gptscript import GPTScript
from gptscript.frame import RunFrame, CallFrame, PromptFrame
from gptscript.run import Run
from gptscript.opts import Options
from gptscript.prompt import PromptResponse

gptscript = GPTScript()


async def prompt(run: Run, event: RunFrame | CallFrame | PromptFrame):
    if isinstance(event, PromptFrame):
        # The responses field here is a dictionary of prompt fields to values.
        await gptscript.prompt(PromptResponse(id=event.id, responses={event.fields[0]: "Some value"}))


async def evaluate_example():
    run = gptscript.run("/path/to/file", opts=Options(prompt=True), event_handlers=[prompt])
    output = await run.text()

    print(output)

    gptscript.close()

Example Usage

from gptscript.gptscript import GPTScript
from gptscript.tool import ToolDef

# Create the GPTScript object
gptscript = GPTScript()

# Define a tool
complex_tool = ToolDef(
    tools=["sys.write"],
    jsonResponse=True,
    cache=False,
    instructions="""
    Create three short graphic artist descriptions and their muses.
    These should be descriptive and explain their point of view.
    Also come up with a made-up name, they each should be from different
    backgrounds and approach art differently.
    the JSON response format should be:
    {
        artists: [{
            name: "name"
            description: "description"
        }]
    }
    """
)

# Execute the complex tool
run = gptscript.evaluate(complex_tool)
print(await run.text())

gptscript.close()

Example 2: Multiple tools

In this example, multiple tools are provided to the evaluate function. The first tool is the only one that can omit the name field. The tools are joined and passed to gptscript as a single script.

from gptscript.gptscript import GPTScript
from gptscript.tool import ToolDef

gptscript = GPTScript()

tools = [
    ToolDef(tools=["echo"], instructions="echo hello times"),
    ToolDef(
        name="echo",
        tools=["sys.exec"],
        description="Echo's the input",
        args={"input": "the string input to echo"},
        instructions="""
        #!/bin/bash
        echo ${input}
        """,
    ),
]

run = gptscript.evaluate(tools)

print(await run.text())

gptscript.close()
