
A general library for wrapping and calling LLMs for prompt engineering.

Project description

PREWL

Prompt Engineering Wrapper for LLMs (PREWL): A library for rapidly prototyping LLM-based applications via prompt engineering for NLU.

Usage

import prewl

# Load configuration for the backend (e.g., GPT-3 credentials)
prewl.configure("config.json")

# Load the example prompts
examples = prewl.load_prompts("prompts.json")

PATTERN = """
Text: {text}
Sentiment: {sentiment}
"""

# Build a Prompts object from the pattern and examples
prompts = prewl.load_prompts(PATTERN, examples, output='sentiment')

# Build the backend-driven model that will be used
model = prewl.train(prompts)  # Model object

# Use the model to build a prompt for the LLM, fetch the completion, and parse it
new_input = "This movie was off the hook!"
resp = model.infer(new_input)

print("\nNew input: ", new_input)
print("Prediction: ", resp)
print()
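The config.json passed to prewl.configure holds the backend credentials (e.g., a GPT-3 API key). Its exact schema is not documented on this page, so the fragment below is only a guess at its shape; both field names and values are assumptions, not the library's actual format:

```json
{
  "backend": "gpt3",
  "api_key": "YOUR_API_KEY_HERE"
}
```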

More examples can be found in the examples/ directory.
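For context, the pattern-plus-examples design above amounts to standard few-shot prompt assembly: each example record fills the pattern once, and the new input fills it with the output field left blank for the LLM to complete. The sketch below is plain Python illustrating that idea, not the prewl API; the helper name and example data are hypothetical.

```python
# Hypothetical illustration of few-shot prompt assembly; not the prewl API.
PATTERN = "Text: {text}\nSentiment: {sentiment}\n"

# Example records of the kind a prompts.json file might hold.
examples = [
    {"text": "What a waste of an evening.", "sentiment": "negative"},
    {"text": "I laughed the whole way through.", "sentiment": "positive"},
]

def build_prompt(pattern, examples, new_text):
    """Concatenate the filled-in examples, then append the new input
    with the output field left blank for the LLM to complete."""
    shots = "".join(pattern.format(**ex) for ex in examples)
    query = pattern.format(text=new_text, sentiment="").rstrip()
    return shots + query

prompt = build_prompt(PATTERN, examples, "This movie was off the hook!")
print(prompt)
```

The completion the backend returns for such a prompt is then parsed out of the text that follows the final blank field.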

Contributing

Coming soon...

Requirements

Setting up a virtual environment

python -m venv .env
source .env/bin/activate

Installing torch

pip install torch --extra-index-url https://download.pytorch.org/whl/cu113

Citing This Work

Coming soon...

Download files

Source Distribution

prewl-0.0.3.tar.gz (5.8 kB)

Built Distribution

prewl-0.0.3-py3-none-any.whl (7.1 kB)
