
Project description

unix4summary


unix4summary is a lightweight Python package that extracts structured summaries and key information from textual prompts related to Unix Fourth Edition.
It parses user inputs—such as command descriptions, system behaviours, or feature overviews—and returns concise, well‑formatted details (e.g., command syntax, explanations, or expected outputs).
The tool relies on regular-expression matching and a retry mechanism to enforce a consistent return format, making it well suited to quick reference or documentation generation; it works on text only and does not handle multimedia content.
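The retry-and-validate approach described above can be sketched as follows. The function and parameter names here are illustrative only, not the package's actual API:

```python
import re

def summarize_with_retries(prompt, call_llm, max_retries=3):
    """Call an LLM and retry until the reply matches the expected
    bullet-list format (lines beginning with '- ')."""
    bullet = re.compile(r"^- .+$", re.MULTILINE)
    for _ in range(max_retries):
        reply = call_llm(prompt)
        lines = bullet.findall(reply)
        if lines:  # at least one well-formed bullet line found
            return lines
    raise ValueError("model never returned a well-formatted summary")
```

Each attempt is validated against the expected format, and only a reply that parses cleanly is returned to the caller.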


Installation

pip install unix4summary

Basic Usage

from unix4summary import unix4summary

user_input = """
Explain the `exec` system call in Unix 4th Edition.
Provide the syntax and typical use cases.
"""

# Use the default LLM7 model
summary = unix4summary(user_input)

print(summary)

Output (example)

[
  "- Syntax: execve(const char *pathname, char *const argv[], char *const envp[])",
  "- Purpose: Replaces the current process image with a new process image.",
  "- Typical Usage: Executing a shell program from a custom script."
]

Custom Language Model

unix4summary uses ChatLLM7 (from langchain_llm7) by default.
You can provide any LangChain BaseChatModel instance to switch providers.

OpenAI

from langchain_openai import ChatOpenAI
from unix4summary import unix4summary

llm = ChatOpenAI()

response = unix4summary(user_input, llm=llm)
print(response)

Anthropic

from langchain_anthropic import ChatAnthropic
from unix4summary import unix4summary

llm = ChatAnthropic(model="claude-3-5-sonnet-latest")  # ChatAnthropic requires a model name

response = unix4summary(user_input, llm=llm)
print(response)

Google Gemini

from langchain_google_genai import ChatGoogleGenerativeAI
from unix4summary import unix4summary

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # a model name is required

response = unix4summary(user_input, llm=llm)
print(response)

Tip: If you prefer to keep using the default ChatLLM7 but need higher rate limits, set an API key via the environment variable LLM7_API_KEY or pass it directly:

response = unix4summary(user_input, api_key="YOUR_LLM7_TOKEN")

You can obtain a free API key at https://token.llm7.io/.
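Both routes end up in the same place. For instance, the variable can be set from Python before the first call (the token value below is a placeholder, exactly as in the snippet above):

```python
import os

# unix4summary reads LLM7_API_KEY automatically when no api_key is passed.
os.environ["LLM7_API_KEY"] = "YOUR_LLM7_TOKEN"  # placeholder, not a real token
```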


Configuration Parameters

Parameter    Type                       Description
user_input   str                        Text to process (Unix 4th Edition related command or concept).
llm          Optional[BaseChatModel]    LangChain language-model instance. If omitted, ChatLLM7 is used.
api_key      Optional[str]              API key for LLM7; read from the LLM7_API_KEY environment variable by default.

Contributing

Issues and pull requests are welcome!


License

MIT © Eugene Evstafev

