GenAI Techne System (gtsystem)
A low code Python package for crafting GenAI applications quickly
GenAI Techne is on a mission to help enterprises and professionals excel in the craft of Generative AI. Check out the GenAI Techne Substack, where you can read more about our mission, read gtsystem documentation, learn from step-by-step tutorials, and influence the roadmap of gtsystem for your use cases.
Getting Started
To get started using the gtsystem package, follow these steps.
Step 1. Install the gtsystem package using pip install gtsystem
Step 2. Open a Jupyter notebook and try this sample.
from gtsystem import openai, bedrock

prompt = 'How many faces does a tetrahedron have?'

# Ask the same question of multiple models
openai.gpt_text(prompt)      # OpenAI GPT
bedrock.llama_text(prompt)   # Llama hosted on Amazon Bedrock
bedrock.claude_text(prompt)  # Claude hosted on Amazon Bedrock
Features and Notebook Samples
The gtsystem package source is available in a repository on GitHub.
You can read more about the vision behind gtsystem in the GenAI Techne Substack post. You can learn the gtsystem API by following along with the notebook samples included in the gtsystem repo.
01-evaluate.ipynb
for single statement prompt evaluations across multiple models, including OpenAI GPT-4 and Bedrock-hosted Claude 2.1 and Llama 2.
02-render.ipynb
for well-formatted rendering of the model responses.
03-tasks.ipynb
for loading evaluation tasks - find, list, and load prompts by task, including optional parameter values for temperature and TopP.
04-instrument.ipynb
for instrumenting and comparing multiple models on latency and response size.
05-benchmark.ipynb
for automated benchmarking of response quality from models like Llama and Claude, using GPT-4 as an LLM evaluator.
Installing Dependencies
You can install the following dependencies to work with gtsystem, based on your needs. Start with our requirements.txt or create your own. Then run pip install -r requirements.txt within your environment.
# Python
pandas
markdown
openpyxl
# Jupyter Notebooks
jupyterlab
ipywidgets
# Amazon Bedrock / AWS
boto3
awscli
botocore
# OpenAI / GPT
openai
Amazon Bedrock Setup
To use Amazon Bedrock-hosted models like Llama and Claude, follow these steps.
Step 1. Log in to the AWS Console > Launch Identity and Access Management (IAM) > Create a user for Command-Line Interface (CLI) access. Read the Bedrock documentation for more details.
Step 2. Install the AWS CLI > Run aws configure in Terminal > Add the credentials from Step 1.
Ollama Setup
To use Ollama-provided LLMs locally on your laptop, follow these steps.
Step 1. Download Ollama. Note the memory requirements for each model: 7b models generally require at least 8GB of RAM, 13b models at least 16GB, and 70b models at least 64GB.
Step 2. Find a model in the Ollama library > Run the listed command in a terminal to download and run the model. Currently gtsystem supports popular models like llama2, mistral, and phi.
OpenAI Setup
To use OpenAI models follow these steps.
Step 1. Sign up for OpenAI API access and get an API key.
Step 2. Add OpenAI API Key to your ~/.zshrc
or ~/.bashrc
using export OPENAI_API_KEY="your-key-here"
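Once the key is exported, you can verify from Python that it is visible to your environment before calling any OpenAI-backed gtsystem function. A minimal sketch (the helper name openai_key_configured is illustrative, not part of gtsystem):

```python
import os

def openai_key_configured() -> bool:
    """Return True if OPENAI_API_KEY is set to a non-empty value."""
    return bool(os.environ.get("OPENAI_API_KEY"))

if __name__ == "__main__":
    if openai_key_configured():
        print("OPENAI_API_KEY is set")
    else:
        print("OPENAI_API_KEY is missing; add it to ~/.zshrc or ~/.bashrc")
```

Remember to open a new terminal (or source your shell profile) after editing ~/.zshrc or ~/.bashrc, or the variable will not be visible.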
Basic Python Environment
If you are new to Python then here is how you can get started from scratch.
First, you should be running the latest Python on your system, with the Python package manager (pip) upgraded to the latest version.
python --version
# should return Python 3.10.x or higher as of Jan '23
pip --version
# should return pip 22.3.x or higher as of Jan '23
Follow this guide for Mac OS X if you do not have the latest Python. If installing a specific version of Python for managing dependencies, follow this thread to install pyenv, a Python version manager. If required, upgrade pip to the latest using the following command.
pip install --user --upgrade pip
We will now create a virtual environment so that our dependencies are isolated and do not conflict with system-installed packages. We will follow this guide for creating and managing the virtual environment. First, change to the directory where we will develop our application.
python -m venv env
If you run ls env you will see the following folders and files created.
bin include lib pyvenv.cfg
Now we can activate our virtual environment like so. You will notice the shell prompt is prefixed with (env) to indicate you are now running in the virtual environment.
. env/bin/activate
You can confirm that you are now running inside the virtual environment with its own Python.
which python
## should return /Users/.../env/bin/python
To leave the virtual environment, use the deactivate command. Re-enter using the same activate command as earlier.
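The full virtual environment workflow above can be run as one sequence (assuming a python3 interpreter is on your PATH):

```shell
# Create an isolated environment in ./env
python3 -m venv env

# Activate it; the shell prompt gains an (env) prefix
. env/bin/activate

# Confirm the active interpreter lives inside env/
which python

# Leave the environment when finished
deactivate
```

With the environment active, pip install -r requirements.txt installs gtsystem's dependencies into env/ only, leaving system packages untouched.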