
Run gptscripts from Python apps


GPTScript Python Module

Introduction

The GPTScript Python module is a library that provides a simple interface to create and run gptscripts within Python applications and Jupyter notebooks. It allows you to define tools, execute them, and process the responses.

Installation

You can install the GPTScript Python module using pip.

pip install gptscript

On macOS, Windows x64, and Linux (x86_64 and aarch64), the platform-specific wheels include the gptscript binary.

SDIST and none-any wheel installations

When installing from the sdist or the none-any wheel, the binary is not packaged by default. You must run the install_gptscript command to install the binary.

install_gptscript

The script is installed into the same bin directory as the Python executable, so it should already be on your PATH.

Alternatively, you can install the gptscript CLI from your code by running:

from gptscript.install import install
install()

Using an existing gptscript cli

If you already have the gptscript CLI installed, you can point the module at it by setting the GPTSCRIPT_BIN environment variable:

export GPTSCRIPT_BIN="/path/to/gptscript"

Using the Module

The module requires the OPENAI_API_KEY environment variable to be set with your OpenAI API key. You can set it in your shell or in your code.

export OPENAI_API_KEY="your-key"
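
To set it from your code instead, assign the variable to the process environment before calling any gptscript functions, for example:

import os

# Set the OpenAI API key for this process (placeholder value shown here;
# in practice, load the key from a secure location rather than hard-coding it).
os.environ["OPENAI_API_KEY"] = "your-key"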

Tools

The Tool class represents a gptscript tool. The fields align with what you can define in a normal gptscript .gpt file; a short example follows the field list below.

Fields

  • name: The name of the tool.
  • description: A description of the tool.
  • tools: Additional tools associated with the main tool.
  • max_tokens: The maximum number of tokens to generate.
  • model: The GPT model to use.
  • cache: Whether to use caching for responses.
  • temperature: The temperature parameter for response generation.
  • args: Additional arguments for the tool.
  • internal_prompt: Boolean, defaults to False.
  • instructions: Instructions or additional information about the tool.
  • json_response: Whether the response should be in JSON format. (If you set this to True, you must also include the word 'json' in the instructions.)
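
As a minimal sketch, a tool combining several of these fields might look like the following (the name, args, and instructions are made up for illustration):

from gptscript.tool import Tool

# Illustrative tool that summarizes text, using only fields from the list above.
summary_tool = Tool(
    name="summarize",
    description="Summarizes a block of text.",
    max_tokens=200,
    cache=False,
    temperature=0.2,
    args={"text": "the text to summarize"},
    instructions="Summarize the provided text in two sentences.",
)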

Primary Functions

Aside from the list functions, there are exec and exec_file functions that let you execute a tool and get the responses. Both functions also have streaming variants if you want to process the output streams in your code while the tool is running.

Opts

You can pass the following options to the exec and exec_file functions:

opts = {
    "cache": True,     # default; set to False to disable caching
    "cache-dir": "",   # empty means the default cache directory is used
}

cache can be set to True or False to enable or disable caching globally, or it can be set at the individual tool level. cache-dir can be set to a directory to use for caching; if not set, the default cache directory is used.
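
For example, to run a tool with caching disabled and a custom cache directory (a sketch; the directory path is illustrative, and opts is passed as shown in the function signatures below):

from gptscript.command import exec
from gptscript.tool import Tool

tool = Tool(instructions="Say hello.")

# Disable caching for this run and point the cache at a custom directory.
response = exec(tool, opts={"cache": False, "cache-dir": "/tmp/gptscript-cache"})
print(response)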

list_models()

This function lists the available GPT models.

from gptscript.command import list_models

models = list_models()
print(models)

list_tools()

This function lists the available tools.

from gptscript.command import list_tools

tools = list_tools()
print(tools)

exec(tool, opts)

This function executes a tool and returns the response.

from gptscript.command import exec
from gptscript.tool import Tool

tool = Tool(
    json_response=True,
    instructions="""
Create three short graphic artist descriptions and their muses. 
These should be descriptive and explain their point of view.
Also come up with a made up name, they each should be from different
backgrounds and approach art differently.
the response should be in JSON and match the format:
{
   artists: [{
      name: "name"
      description: "description"
   }]
}
""",
    )


response = exec(tool)
print(response)
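
Because json_response=True is set, the response should be a JSON string that can be parsed with Python's json module (assuming exec returns the raw response text as in the example above; the exact structure depends on how closely the model follows the requested format):

import json

# Parse the JSON response and list the generated artist names.
data = json.loads(response)
for artist in data["artists"]:
    print(artist["name"])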

exec_file(tool_path, input="", opts)

This function executes a tool from a file and returns the response. The input values are passed to the tool as args.

from gptscript.command import exec_file

response = exec_file("./example.gpt")
print(response)
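
To pass input values to the script's args, supply the input parameter. A sketch, assuming example.gpt declares an arg named topic and that args are passed using the gptscript CLI's --name value flag convention:

from gptscript.command import exec_file

# "topic" is a hypothetical arg that example.gpt would need to declare
# for this input to have any effect.
response = exec_file("./example.gpt", input="--topic 'the solar system'")
print(response)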

stream_exec(tool, opts)

This function streams the execution of a tool and returns the output stream, the error stream, and a wait function for the underlying process. The streams must be read from.

from gptscript.command import stream_exec
from gptscript.tool import Tool

tool = Tool(
    json_response=True,
    instructions="""
Create three short graphic artist descriptions and their muses. 
These should be descriptive and explain their point of view.
Also come up with a made up name, they each should be from different
backgrounds and approach art differently.
the response should be in JSON and match the format:
{
   artists: [{
      name: "name"
      description: "description"
   }]
}
""",
    )

def print_output(out, err):
    # Error stream has the debug info that is useful to see
    for line in err:
        print(line)

    for line in out:
        print(line)

out, err, wait = stream_exec(tool)
print_output(out, err)
wait()

stream_exec_file(tool_path, input="",opts)

This function streams the execution of a tool from a file and returns the output stream, the error stream, and a wait function for the underlying process. The input values are passed to the tool as args.

from gptscript.command import stream_exec_file

def print_output(out, err):
    # Error stream has the debug info that is useful to see
    for line in err:
        print(line)

    for line in out:
        print(line)

out, err, wait = stream_exec_file("./init.gpt")
print_output(out, err)
wait()

Example Usage

from gptscript.command import exec
from gptscript.tool import FreeForm, Tool

# Define a simple tool
simple_tool = FreeForm(
    content="""
What is the capital of the United States?
"""
)

# Define a complex tool
complex_tool = Tool(
    tools=["sys.write"],
    json_response=True,
    cache=False,
    instructions="""
    Create three short graphic artist descriptions and their muses.
    These should be descriptive and explain their point of view.
    Also come up with a made-up name, they each should be from different
    backgrounds and approach art differently.
    the JSON response format should be:
    {
        artists: [{
            name: "name"
            description: "description"
        }]
    }
    """
)

# Execute the complex tool
response, err = exec(complex_tool)
print(err)
print(response)

# Execute the simple tool
resp, err = exec(simple_tool)
print(err)
print(resp)

Example 2 multiple tools

In this example, multiple tools are provided to the exec function. Only the first tool may omit the name field. The tools are joined and passed to gptscript as a single gpt script.

from gptscript.command import exec
from gptscript.tool import Tool

tools = [
    Tool(tools=["echo"], instructions="echo hello times"),
    Tool(
        name="echo",
        tools=["sys.exec"],
        description="Echo's the input",
        args={"input": "the string input to echo"},
        instructions="""
        #!/bin/bash
        echo ${input}
        """,
    ),
]

resp, err = exec(tools)
print(err)
print(resp)

