SAMMO (📘 User Guide)
A flexible, easy-to-use library for running and optimizing prompts for Large Language Models (LLMs).
How to Get Started
Go to the user guide for examples, how-tos, and API reference.
pip install sammo
Use Cases
SAMMO is designed to support:
- Efficient data labeling: Supports minibatching by packing and parsing multiple datapoints into a single prompt.
- Prompt prototyping and engineering: Re-usable components and prompt structures to quickly build and test new prompts.
- Instruction optimization: Optimize instructions to do better on a given task.
- Prompt compression: Compress prompts while maintaining performance.
- Large-scale prompt execution: Parallelization and rate-limiting out-of-the-box so you can run many queries in parallel and at scale without overwhelming the LLM API (see the sketch after these lists).
It is less useful if you want to build
- Interactive, agent-based LLM applications (→ check out AutoGen)
- Interactive, production-ready LLM applications (→ check out LangChain)
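For the large-scale execution use case, throttling is configured on the runner. Below is a minimal sketch, assuming the optional rate_limit constructor argument and the AtMost helper from sammo.throttler described in the user guide (parameter names may differ between versions), plus a placeholder API_CONFIG dict standing in for your OpenAI credentials.

from sammo.runners import OpenAIChat
from sammo.throttler import AtMost

# Placeholder credentials; replace with your own key.
API_CONFIG = {"api_key": "YOUR_OPENAI_API_KEY"}

# Allow at most 3 requests in flight at any time; additional requests wait,
# while everything below that limit is dispatched concurrently.
runner = OpenAIChat(
    model_id="gpt-3.5-turbo",
    api_config=API_CONFIG,
    rate_limit=AtMost(3, "running"),
)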
Example
This example extends the chat dialog example from Guidance by running queries in parallel.
from sammo.base import Template
from sammo.components import GenerateText, Output
from sammo.runners import OpenAIChat

# API_CONFIG holds the OpenAI credentials, e.g. {"api_key": "..."}.
runner = OpenAIChat(model_id="gpt-3.5-turbo", api_config=API_CONFIG)

# Step 1: ask the model to name experts for the templated input question.
expert_names = GenerateText(
    Template(
        "I want a response to the following question:"
        "{{input}}\n"
        "Name 3 world-class experts (past or present) who would be great at answering this? Don't answer the question yet."
    ),
    system_prompt="You are a helpful and terse assistant.",
    randomness=0,
    max_tokens=300,
)

# Step 2: continue the dialog and ask for a joint, anonymous answer.
joint_answer = GenerateText(
    "Great, now please answer the question as if these experts had collaborated in writing a joint anonymous answer.",
    history=expert_names,
    randomness=0,
    max_tokens=500,
)

# All three questions are run in parallel against the runner.
questions = [
    "How can I be more productive?",
    "What will AI look like in 10 years?",
    "How do we end world hunger?",
]

print(Output(joint_answer).run(runner, questions))
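During prototyping it is easy to pay repeatedly for identical calls. A minimal sketch of enabling response caching, assuming the runner's optional cache argument shown in the user guide; cached responses are reused on later runs instead of re-querying the API.

import os

# Responses are written to a local file and reused on subsequent runs.
runner = OpenAIChat(
    model_id="gpt-3.5-turbo",
    api_config=API_CONFIG,
    cache=os.path.join("cache", "readme.tsv"),
)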
Licence
This project is licensed under MIT.
Authors
SAMMO was written by Tobias Schnabel.
Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
Download files
- Source Distribution: sammo-0.1.0.2.tar.gz
- Built Distribution: sammo-0.1.0.2-py3-none-any.whl
File details
Details for the file sammo-0.1.0.2.tar.gz.
File metadata
- Download URL: sammo-0.1.0.2.tar.gz
- Upload date:
- Size: 54.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.6.1 CPython/3.11.5 Windows/10
File hashes
Algorithm | Hash digest
---|---
SHA256 | b69d950c39a5a338043c53bf99fc53465cefce83336e08a5c59c25b3bc71c79e
MD5 | 6711f956138cb7ba80fe870c200a281c
BLAKE2b-256 | 45c29d4870131c115c390ba5e0e3a4a6048f94c453895845faaa22533bccdade
File details
Details for the file sammo-0.1.0.2-py3-none-any.whl.
File metadata
- Download URL: sammo-0.1.0.2-py3-none-any.whl
- Upload date:
- Size: 61.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.6.1 CPython/3.11.5 Windows/10
File hashes
Algorithm | Hash digest
---|---
SHA256 | 2afba694f0e2ccb26a8d6623d24a12069845da90fb5b4a7f69b5472443b69673
MD5 | 9f6c90cced7663e9fbe654307e7c1918
BLAKE2b-256 | d58cb369c1221700a987289cf666ff5455ab584abfa309df2aff8d78ce541c9f