
Context-aware, LLM-powered rolling conversation summarizer


================================================================================

CONVERSATION SUMMARIZER
================================================================================

Progressive, context-aware conversation summarizer powered by Azure LLM.
Automatically compresses long conversations into structured summaries
while preserving user intent, key decisions, and action items.

================================================================================
QUICK START
================================================================================

1. INSTALL DEPENDENCIES
──────────────────────────────────────────────────────────────────────────
pip install -r requirements.txt

Or manually:
pip install openai


2. SET UP AZURE CREDENTIALS
──────────────────────────────────────────────────────────────────────────
The module needs three environment variables to call Azure OpenAI:

AZURE_OPENAI_ENDPOINT     e.g. https://my-resource.openai.azure.com/
AZURE_OPENAI_API_KEY      Your Azure API key
AZURE_OPENAI_DEPLOYMENT   Your model deployment name

Set them as follows:

Windows (PowerShell):
$env:AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
$env:AZURE_OPENAI_API_KEY="your-api-key-here"
$env:AZURE_OPENAI_DEPLOYMENT="your-deployment-name"

Windows (Command Prompt):
set AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
set AZURE_OPENAI_API_KEY=your-api-key-here
set AZURE_OPENAI_DEPLOYMENT=your-deployment-name

Mac/Linux:
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_API_KEY="your-api-key-here"
export AZURE_OPENAI_DEPLOYMENT="your-deployment-name"
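A quick way to confirm all three variables are actually visible to Python before
running anything. This is a minimal sketch; `missing_azure_vars` is a hypothetical
helper for illustration, not part of the package:

```python
import os

# Variable names used by this project (see the table above).
REQUIRED_VARS = [
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_API_KEY",
    "AZURE_OPENAI_DEPLOYMENT",
]

def missing_azure_vars(env=os.environ):
    """Return the names of any required Azure variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_azure_vars()
    if missing:
        print("Missing environment variables:", ", ".join(missing))
    else:
        print("Azure credentials found.")
```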


3. RUN THE END-TO-END TEST
──────────────────────────────────────────────────────────────────────────
This test reads sample_conversation.txt, calls your Azure LLM,
and prints a formatted summary with sanity checks.

From the project root:
cd summarizer_package
python test_summarizer.py

Or from the workspace root (outer folder):
python .\summarizer_package\test_summarizer.py

Expected output:
- Loads the sample conversation
- Calls Azure LLM to summarize
- Prints summary with overview, intent, decisions, and action items
- Runs sanity checks (all should PASS)


4. RUN UNIT TESTS
──────────────────────────────────────────────────────────────────────────
Unit tests use a mock LLM and do NOT require Azure credentials.

From the project root:
python -m pytest

Or run test_conversation_summarizer.py directly:
python test_conversation_summarizer.py

================================================================================
HOW TO USE IN CODE
================================================================================

import os

from openai import AzureOpenAI

from conversation_summarizer import ConversationSummarizer, LLMClient

# 1. Wire up your Azure client
azure_client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version=os.environ.get("AZURE_OPENAI_API_VERSION", "2024-02-01"),
)

# 2. Create an LLMClient wrapper
llm_client = LLMClient(
    client=azure_client,
    model=os.environ["AZURE_OPENAI_DEPLOYMENT"],
)

# 3. Initialize the summarizer
summarizer = ConversationSummarizer(llm_client)

# 4. Progressive compression (auto-triggers when buffer hits threshold)
from conversation_summarizer import Message, Role
summarizer.add_message(Message(Role.USER, "Hello, I need help with X"))
summarizer.add_message(Message(Role.ASSISTANT, "Sure, tell me more..."))
# ... add more turns ...

# 5. Get context for the next LLM call
# (includes summary block + recent turns)
context = summarizer.get_context_messages()
context.append({"role": "user", "content": "What should we do next?"})
response = llm_client.complete(context)

# 6. Optionally summarize plain text directly
# (conversation_text: any "User:" / "Assistant:" delimited string,
#  e.g. the contents of sample_conversation.txt)
summary_text = summarizer.summarize_to_text(conversation_text)
print(summary_text)

================================================================================
FILES
================================================================================

conversation_summarizer.py
Core summarizer module. No Azure coupling—use any LLM backend
by implementing the LLMClient interface.
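As a sketch of what a custom backend can look like, assuming the interface is the
single complete(messages) call shown in the usage example above (EchoLLMClient is
a hypothetical name, not shipped with the package):

```python
class EchoLLMClient:
    """Toy backend exposing the same surface as LLMClient: a
    complete(messages) call that returns the model's reply text.
    Hypothetical stand-in for testing; not part of the package."""

    def complete(self, messages):
        # Pretend the "model" summarizes by counting user turns.
        users = sum(1 for m in messages if m.get("role") == "user")
        return f"Summary of a conversation with {users} user turn(s)."
```

A backend like this lets ConversationSummarizer run in unit tests without any
Azure setup, which is the same idea test_conversation_summarizer.py uses.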

test_summarizer.py
End-to-end test: reads sample_conversation.txt, calls Azure LLM,
prints summary and runs sanity checks.
Requires: Azure credentials (see SETUP above).

test_conversation_summarizer.py
Unit tests for ConversationSummarizer using a mock LLM.
No Azure needed. Run with: pytest test_conversation_summarizer.py

sample_conversation.txt
Sample conversation input. Edit with your own conversation
using the format below.

requirements.txt
Python package dependencies.

README.txt
This file.

================================================================================
INPUT CONVERSATION FORMAT
================================================================================

Conversations should be plain text with a simple format:

User: I need help building a leave management system.
Assistant: What are the requirements?
User: It should support multiple leave types, email notifications, ...

The summarizer treats "User:" and "Assistant:" as delimiters.
Edit sample_conversation.txt to test with your own sample.
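Splitting on those delimiters can be sketched as follows (a hypothetical
parse_conversation helper for illustration; the package's own parser may differ):

```python
def parse_conversation(text):
    """Split 'User:' / 'Assistant:' delimited text into (role, content)
    pairs, joining any continuation lines onto the current turn."""
    turns = []
    role, lines = None, []
    for line in text.splitlines():
        stripped = line.strip()
        lowered = stripped.lower()
        if lowered.startswith("user:") or lowered.startswith("assistant:"):
            if role is not None:
                turns.append((role, " ".join(l for l in lines if l)))
            label, _, rest = stripped.partition(":")
            role, lines = label.lower(), [rest.strip()]
        elif role is not None:
            # Continuation line belonging to the current speaker.
            lines.append(stripped)
    if role is not None:
        turns.append((role, " ".join(l for l in lines if l)))
    return turns
```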

================================================================================
CUSTOMIZATION
================================================================================

Auto-compress threshold:
ConversationSummarizer(llm_client, auto_summarize_after=30)
(compress after 30 messages instead of the default)

Recent turns to keep in buffer:
ConversationSummarizer(llm_client, keep_recent_turns=5)
(retain the 5 most recent messages after each compression)

Force compression:
summary = summarizer.force_compress()
(manually compress the entire buffer, regardless of size)

================================================================================
