SAMMO (📘User Guide)
A flexible, easy-to-use library for running and optimizing prompts for Large Language Models (LLMs).
How to Get Started
Go to the user guide for examples, how-tos, and API reference.
pip install sammo
Use Cases
SAMMO is designed to support
- Efficient data labeling: Supports minibatching by packing and parsing multiple datapoints into a single prompt.
- Prompt prototyping and engineering: Re-usable components and prompt structures to quickly build and test new prompts.
- Instruction optimization: Automatically optimize instructions to improve performance on a given task.
- Prompt compression: Compress prompts while maintaining performance.
- Large-scale prompt execution: Parallelization and rate limiting out of the box, so you can run many queries at scale without overwhelming the LLM API.
It is less useful if you want to build
- Interactive, agent-based LLM applications (→ check out AutoGen)
- Interactive, production-ready LLM applications (→ check out LangChain)
Example
This example extends the chat dialog example from Guidance by running queries in parallel.
from sammo.base import Template
from sammo.components import GenerateText, Output
from sammo.runners import OpenAIChat

# API_CONFIG holds your OpenAI credentials, e.g. {"api_key": "..."}.
runner = OpenAIChat(model_id="gpt-3.5-turbo", api_config=API_CONFIG)

# First call: ask the model to name experts without answering the question yet.
expert_names = GenerateText(
    Template(
        "I want a response to the following question:"
        "{{input}}\n"
        "Name 3 world-class experts (past or present) who would be great at answering this? Don't answer the question yet."
    ),
    system_prompt="You are a helpful and terse assistant.",
    randomness=0,
    max_tokens=300,
)

# Second call: reuse the first exchange as conversation history.
joint_answer = GenerateText(
    "Great, now please answer the question as if these experts had collaborated in writing a joint anonymous answer.",
    history=expert_names,
    randomness=0,
    max_tokens=500,
)

questions = [
    "How can I be more productive?",
    "What will AI look like in 10 years?",
    "How do we end world hunger?",
]

# All three questions are executed in parallel against the runner.
print(Output(joint_answer).run(runner, questions))
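To scale the same pattern up, the runner can also be given a local response cache, a timeout, and a request rate cap before fanning a component out over many inputs. The sketch below is illustrative rather than canonical: the cache and timeout arguments mirror SAMMO's quick-start example, while the rate_limit value is an assumption about the per-second request cap described in the user guide.

import os

from sammo.base import Template
from sammo.components import GenerateText, Output
from sammo.runners import OpenAIChat

# Runner with a local cache and per-request timeout; rate_limit is assumed
# to cap the number of requests issued per second -- adjust to your quota.
runner = OpenAIChat(
    model_id="gpt-3.5-turbo",
    api_config={"api_key": os.environ["OPENAI_API_KEY"]},
    cache=os.getenv("CACHE_FILE", "cache.tsv"),
    timeout=30,
    rate_limit=10,
)

summarize = GenerateText(
    Template("Summarize the following text in one sentence: {{input}}"),
    randomness=0,
)

# Output.run fans the inputs out in parallel, subject to the rate limit.
print(Output(summarize).run(runner, ["text one", "text two", "text three"]))

With a cache file set, repeated runs reuse earlier responses instead of re-querying the API.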
Licence
This project is licensed under MIT.
Authors
SAMMO was written by Tobias Schnabel.
Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
Download files
File details
Details for the file sammo-0.1.0.4.tar.gz.
File metadata
- Download URL: sammo-0.1.0.4.tar.gz
- Upload date:
- Size: 54.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.6.1 CPython/3.11.5 Windows/10
File hashes
Algorithm | Hash digest
---|---
SHA256 | 0b846da70c787d437734e594803965458eb66b219de58595398d96b6f57126dc
MD5 | 2d5f863ba12f3aedf913aa16f874648c
BLAKE2b-256 | 79e6a9fb51974457ffff3515804232c0825f04e2ed46869f0c04fcfc53c77808
File details
Details for the file sammo-0.1.0.4-py3-none-any.whl.
File metadata
- Download URL: sammo-0.1.0.4-py3-none-any.whl
- Upload date:
- Size: 61.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.6.1 CPython/3.11.5 Windows/10
File hashes
Algorithm | Hash digest
---|---
SHA256 | 2b10cf9c6471a7e5af20b74e74f697194d5b1105761bfb494ff216872ead8e7a
MD5 | 6186d6475db8b406ac15fabbadbc8bac
BLAKE2b-256 | 5152aff96ca0919d2d70529bcbb2d410774d13592e42729ce2394b06af13ab53