
perf-metric-xtr extracts key performance metrics from tech announcements and outputs standardized summaries.

Project description

perf-metric-xtr


Extract and structure key performance metrics from technology announcements.

Overview

perf-metric-xtr takes raw text about product launches and returns a standardized summary highlighting the specific performance improvements mentioned.

Installation

pip install perf_metric_xtr

Usage Example

from perf_metric_xtr import perf_metric_xtr

response = perf_metric_xtr(user_input="Moore Threads unveils next-gen gaming GPU with 15x performance, 50x ray tracing")
print(response)

Parameters

  • user_input (str): the input text to process
  • llm (Optional[BaseChatModel]): the LangChain LLM instance to use; if not provided, the default ChatLLM7 is used
  • api_key (Optional[str]): the API key for LLM7; if not provided, the LLM7_API_KEY environment variable is used when set (see Rate Limits below)

You can also safely pass your own LLM instance (any LangChain chat model) if you want to use another provider, by passing it like perf_metric_xtr(user_input, llm=your_llm_instance). For example, to use OpenAI:

from langchain_openai import ChatOpenAI
from perf_metric_xtr import perf_metric_xtr

llm = ChatOpenAI()
response = perf_metric_xtr(user_input, llm=llm)

or, for example, to use Anthropic:

from langchain_anthropic import ChatAnthropic
from perf_metric_xtr import perf_metric_xtr

llm = ChatAnthropic()
response = perf_metric_xtr(user_input, llm=llm)

or Google:

from langchain_google_genai import ChatGoogleGenerativeAI
from perf_metric_xtr import perf_metric_xtr

llm = ChatGoogleGenerativeAI()
response = perf_metric_xtr(user_input, llm=llm)

Rate Limits

The default rate limits of the LLM7 free tier are sufficient for most use cases of this package. If you want higher rate limits, you can supply your own API key via the LLM7_API_KEY environment variable or by passing it directly, like perf_metric_xtr(user_input, api_key="your_api_key"). You can get a free API key by registering at https://token.llm7.io/
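The key-resolution order described above (an explicitly passed api_key first, then the LLM7_API_KEY environment variable, otherwise the free tier) can be sketched as a small helper. Note that resolve_api_key is a hypothetical name used for illustration; it is not part of the package's API:

```python
import os

def resolve_api_key(explicit_key=None):
    """Pick the LLM7 API key the way the package describes:
    an explicitly passed api_key wins; otherwise fall back to the
    LLM7_API_KEY environment variable; None means the free tier."""
    return explicit_key or os.environ.get("LLM7_API_KEY")

# An explicit key always takes precedence over the environment.
os.environ["LLM7_API_KEY"] = "env-key"
print(resolve_api_key("direct-key"))  # direct-key
print(resolve_api_key())              # env-key
```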

GitHub Issues

https://github.com/chigwell/perf-metric-xtr

Author

Eugene Evstafev hi@eugene.plus

Download files

Download the file for your platform.

Source Distribution

perf_metric_xtr-2025.12.21195642.tar.gz (4.8 kB)

Uploaded Source

Built Distribution


perf_metric_xtr-2025.12.21195642-py3-none-any.whl (5.4 kB)

Uploaded Python 3

File details

Details for the file perf_metric_xtr-2025.12.21195642.tar.gz.

File metadata

File hashes

Hashes for perf_metric_xtr-2025.12.21195642.tar.gz:

  • SHA256: 65000f5d93e0c14376df247213dbd063fc30b338d79af3a46a93b4eb52cb2b00
  • MD5: 6c3664afde6ca60a983e46b76f558348
  • BLAKE2b-256: fc7b2346477dc642b7b43c5abc9571802bf7f745f60899a7f5d85328b55b8c03


File details

Details for the file perf_metric_xtr-2025.12.21195642-py3-none-any.whl.

File metadata

File hashes

Hashes for perf_metric_xtr-2025.12.21195642-py3-none-any.whl:

  • SHA256: 715f4aa8ccb2b5af5140da89d30e4edce384728e6ac8978d590e7d9584cbbdcf
  • MD5: 538cc5de31e3c2e593321ccdc4fbd564
  • BLAKE2b-256: b266f2cf5b3df2864a4e14e1c0a4f533e844669ff12f570a5cefbeca08ce76d7

