
Project description

perfbenchify


A lightweight package that converts unstructured text descriptions of software performance benchmarks into standardized, comparable metrics. The library uses the llmatch-messages framework to extract key performance indicators—like speed improvements, latency reductions, and throughput increases—from natural‑language benchmark descriptions. Once extracted, the results are returned in a consistent format, enabling developers to easily compare different software frameworks or libraries and make informed performance optimization decisions.

📦 Installation

pip install perfbenchify

🚀 Quick Start

from perfbenchify import perfbenchify

user_input = """
Compared to version 2.1, the new algorithm processes fifty thousand
transactions per second while reducing the average latency from 12ms to 7ms.
The speed improvement is a 15% increase.
"""

results = perfbenchify(user_input)
print(results)
# Example output:
# ['Speed improvement: 15%']
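Because each extracted metric is returned as a plain `"Label: value"` string, it is straightforward to post-process the list into structured data. The parser below is an illustrative sketch (the regex and field names are assumptions, not part of perfbenchify):

```python
import re

# Each extracted metric arrives as "Label: value" text. This small parser
# (illustrative, not part of perfbenchify) turns one line into a dict.
METRIC_RE = re.compile(
    r"^(?P<label>[^:]+):\s*(?P<value>\d+(?:\.\d+)?)\s*(?P<unit>%|ms|tps)?$"
)

def parse_metric(line: str) -> dict:
    """Split one extracted metric string into label, numeric value, and unit."""
    match = METRIC_RE.match(line.strip())
    if match is None:
        raise ValueError(f"Unrecognised metric format: {line!r}")
    return {
        "label": match.group("label").strip(),
        "value": float(match.group("value")),
        "unit": match.group("unit"),
    }

print(parse_metric("Speed improvement: 15%"))
# {'label': 'Speed improvement', 'value': 15.0, 'unit': '%'}
```

With the metrics in dictionary form, they can be fed directly into comparison tables or plotting code.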

✨ Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| `user_input` | `str` | The free-form text containing benchmark information. |
| `llm` | `Optional[BaseChatModel]` | A LangChain LLM instance to use. If omitted, the package defaults to `ChatLLM7` from `langchain_llm7`. |
| `api_key` | `Optional[str]` | API key for ChatLLM7. If omitted, the package checks the environment variable `LLM7_API_KEY`; otherwise it falls back to the placeholder `"None"`, which triggers the free tier of LLM7. |
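The documented lookup order for the API key (explicit argument, then environment variable, then the `"None"` placeholder) can be sketched as a small helper. This is an illustrative mock of the behavior, not the package's actual internals:

```python
import os

def resolve_api_key(api_key=None):
    # Mirrors the documented lookup order (illustrative sketch, not
    # perfbenchify's actual code): an explicit argument wins, then the
    # LLM7_API_KEY environment variable, then the placeholder "None"
    # that selects the LLM7 free tier.
    if api_key:
        return api_key
    return os.environ.get("LLM7_API_KEY", "None")
```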

🛠️ Using a Custom LLM

perfbenchify can work with any LangChain-compatible LLM. Below are examples for three popular providers.

OpenAI

from langchain_openai import ChatOpenAI
from perfbenchify import perfbenchify

user_input = "The new release reduced average latency from 12ms to 7ms."

llm = ChatOpenAI()
# Uses your OpenAI API key from the environment
response = perfbenchify(user_input, llm=llm)
print(response)

Anthropic

from langchain_anthropic import ChatAnthropic
from perfbenchify import perfbenchify

user_input = "The new release reduced average latency from 12ms to 7ms."

llm = ChatAnthropic(model="claude-3-5-sonnet-latest")  # model name is an example
# Uses your Anthropic API key from the environment
response = perfbenchify(user_input, llm=llm)
print(response)

Google Generative AI

from langchain_google_genai import ChatGoogleGenerativeAI
from perfbenchify import perfbenchify

user_input = "The new release reduced average latency from 12ms to 7ms."

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # model name is an example
# Uses your Google API key from the environment
response = perfbenchify(user_input, llm=llm)
print(response)

🔐 API Key and Rate Limits

  • Free tier of LLM7 is sufficient for most use cases.
  • To increase rate limits, supply your own key:
export LLM7_API_KEY="your_key_here"

or pass it directly:

response = perfbenchify(user_input, api_key="your_key_here")

Free keys can be obtained by registering at https://token.llm7.io/.

💡 Features

  • LLM-agnostic: Works with any LangChain LLM (OpenAI, Anthropic, Google, etc.) or the default ChatLLM7.
  • Pattern-based validation: Uses a compiled regex to guarantee the extracted metrics match a predefined format.
  • Automatic retries: Handles unreliable LLM responses by retrying until the output matches the expected pattern.
  • Easy integration: Returned data is a simple Python list of strings, ready for downstream processing or visualisation.
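The validate-and-retry idea behind the second and third features can be sketched as follows. The function names and the pattern are assumptions for illustration, not perfbenchify internals:

```python
import re

# An assumed metric format for illustration: "Label: number" with an
# optional percent sign, e.g. "Speed improvement: 15%".
METRIC_PATTERN = re.compile(r"^[A-Za-z ]+: \d+(\.\d+)?%?$")

def extract_with_retries(call_llm, max_retries=3):
    # Keep querying the LLM until every returned line matches the
    # expected pattern, or give up after max_retries attempts.
    for _ in range(max_retries):
        lines = call_llm()
        if all(METRIC_PATTERN.match(line) for line in lines):
            return lines
    raise RuntimeError("LLM output never matched the expected pattern")

# Simulated LLM: a malformed first response, then a valid one.
attempts = iter([["garbage"], ["Speed improvement: 15%"]])
print(extract_with_retries(lambda: next(attempts)))
# ['Speed improvement: 15%']
```

Validating against a compiled regex makes the extraction robust to the occasional malformed LLM response without any manual intervention.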

📄 License

MIT License – see LICENSE file for details.

📬 Support & Issues

Have questions or encountered a bug? Report them on the GitHub issue tracker: https://github.com/chigwell/perfbenchify

👤 Author

Eugene Evstafev
Email: hi@eugene.plus
GitHub: chigwell


Download files

Source Distribution

perfbenchify-2025.12.21145744.tar.gz (5.9 kB)

  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.1

Hashes:

  • SHA256: af2ae6f92aa5f92f061f065a2a389fd2dca9939e48edacd7819e0d919cfd9057
  • MD5: 212859f31e4a141a9bf477079a93376d
  • BLAKE2b-256: f7798fe7bb27edd481d2b1c2ee3e1aa8540e8a36f43eb3e1f12fb6233fd222a9

Built Distribution

perfbenchify-2025.12.21145744-py3-none-any.whl (6.6 kB, Python 3)

Hashes:

  • SHA256: b165f10394f8cddb82e24c7c73c906e4b17962f0a644b44473b5740e0a47afc7
  • MD5: b30db4be52637e019b3ebbaa7d48d248
  • BLAKE2b-256: 26290b3c3e3c833fba836c19bd9dab5620fa3334adf865541c51769ec5fa8e41
