mycontext-ai
Context engineering for LLMs — 87 cognitive patterns, prompt compilation, 3-tier execution, 13 export formats.
Build once, run anywhere, measure everything.
The Problem · Core Strengths · Quick Start · Use Cases · Patterns
The Problem
Every team building with LLMs hits the same wall:
- Prompt roulette. You tweak wording for hours. Sometimes it works, sometimes it doesn't. There's no way to know why.
- Vendor lock-in. Your prompts are written for OpenAI. Now the team wants Claude. Rewrite everything.
- No structure. System messages, user messages, constraints, output format — every developer invents their own convention.
- No measurement. Is this prompt good? Better than yesterday's? Nobody knows until production breaks.
- Reinventing the wheel. Root cause analysis, decision frameworks, comparative reasoning — proven cognitive methods exist, but teams write ad-hoc prompts from scratch every time.
- No way to prove templates help. You built a prompt template, but can you prove it produces better output than a raw question?
How It Works
mycontext-ai gives you a structured Context object that separates what the AI should know (guidance) from what it should do (directive) and what it must not do (constraints). You build the context once and export it to any LLM or framework.
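In code, that separation looks like this (a minimal sketch; every field is covered in Quick Start):

from mycontext import Context, Guidance, Directive, Constraints

ctx = Context(
    guidance=Guidance(role="Staff engineer"),            # what the AI should know
    directive=Directive(content="Review this design."),  # what it should do
    constraints=Constraints(must_include=["risks"]),     # boundaries on the answer
)
ctx.to_openai()     # same context, exported for any provider
ctx.to_anthropic()

From there, the pipeline looks like this: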
Raw question
↓
[ Intelligence Layer ] — auto-selects the right cognitive pattern
↓
Three execution tiers:
├─ Static Generic — zero-cost compiled prompt (1 LLM call)
├─ Dynamic Compiled — LLM-refined prompt artifact (2-3 calls)
└─ Full Response — complete template execution (2-3 calls)
↓
Export: OpenAI │ Anthropic │ Gemini │ LangChain │ YAML │ 13 formats
↓
[ Quality Metrics ] — score, compare, improve
↓
[ CAI ] — prove the template made a measurable difference
The engine doesn't generate answers. It generates the best possible question for the LLM you're sending it to — and it can prove it.
Core Strengths
These are capabilities that, to our knowledge, exist in no other open-source context or prompt engineering library.
1. 87 Research-Backed Cognitive Patterns
Not generic "write a poem" templates. Each pattern implements a real cognitive framework — Five Whys, fishbone analysis, Socratic method, temporal reasoning, systems archetypes, ethical frameworks — backed by 150+ peer-reviewed papers from cognitive science, decision theory, and systems thinking.
Every pattern has a structured build_context() method with validated inputs, a research-grounded directive, and constraints tuned for the method.
from mycontext.templates.free.reasoning import RootCauseAnalyzer
ctx = RootCauseAnalyzer().build_context(
    problem="API response times tripled after last deployment",
    depth="comprehensive",
)
2. Quality Metrics — Score Any Context on 6 Dimensions
No more guessing. QualityMetrics evaluates any context across six calibrated dimensions: clarity, completeness, specificity, relevance, structure, efficiency. Returns a numeric score, concrete issues, strengths, and actionable suggestions.
Compare two contexts to measure improvement:
from mycontext.intelligence import QualityMetrics
metrics = QualityMetrics()
score = metrics.evaluate(ctx)
print(metrics.report(score))
# → Overall: 0.87 | Clarity: 0.92 | Completeness: 0.85 | ...
# → Issues: ["Directive could be more specific about output format"]
# → Suggestions: ["Add constraints for response length"]
# Compare before/after
diff = metrics.compare(old_ctx, new_ctx)
3. Context Amplification Index (CAI) — Prove Templates Work
CAI is a quantitative metric that answers: "Did this template actually produce better LLM output than a raw prompt?"
It runs the same question through a raw prompt and a template-built context, evaluates both outputs, and computes the ratio:
CAI = templated_score / raw_score
A CAI of 1.3x means the template produced 30% better output. No opinions — numbers.
from mycontext.intelligence import ContextAmplificationIndex
cai = ContextAmplificationIndex(provider="openai")
result = cai.measure(
    question="Why are API response times 3x slower after deploy?",
    template_name="root_cause_analyzer",
)
print(f"CAI: {result.cai_overall:.2f}x ({result.verdict})")
# → CAI: 1.42x (significant lift)
4. Output Evaluator — Score LLM Responses, Not Just Prompts
Other tools score prompts. mycontext also scores the output. The Output Evaluator measures LLM responses across five dimensions that are distinct from prompt quality:
- Instruction Following — did it do what the context asked?
- Reasoning Depth — shallow bullet points or genuine analysis?
- Actionability — can you act on the recommendations?
- Structure Compliance — did it follow the requested format?
- Cognitive Scaffolding — did it use the reasoning framework from the template?
from mycontext.intelligence import OutputEvaluator, OutputDimension
evaluator = OutputEvaluator()
score = evaluator.evaluate(ctx, llm_response)
print(f"Output quality: {score.overall:.2f}")
print(f"Reasoning depth: {score.dimensions[OutputDimension.REASONING_DEPTH]:.2f}")
5. Template Integrator Agent — Fuse Multiple Patterns Into One
When a question needs multiple cognitive methods (e.g., root cause and scenario planning and stakeholder analysis), the Template Integrator doesn't just concatenate them. It uses an LLM to intelligently merge methodologies from multiple templates into a single unified context — one role, one set of rules, one directive.
from mycontext.intelligence import TemplateIntegratorAgent
integrator = TemplateIntegratorAgent()
result = integrator.suggest_and_integrate(
    "Revenue dropped 40% — what happened, what are the scenarios, who's affected?",
    provider="openai",
)
ctx = result.to_context()
ctx.execute(provider="openai")
6. Chain Orchestration Agent — Auto-Build Multi-Step Workflows
Complex questions need multiple reasoning steps. The Chain Orchestration Agent analyzes your question, selects and orders patterns from the full catalog, and generates the build_context() parameters for each step — automatically.
from mycontext.intelligence import build_workflow_chain
result = build_workflow_chain(
    "Outage last week caused churn spike. What happened, root cause, and recovery plan?",
    provider="openai",
)
print(result.chain) # ['temporal_sequence_analyzer', 'root_cause_analyzer', 'future_scenario_planner']
print(result.chain_params) # auto-generated build_context params for each step
7. Intelligent Pattern Suggestion — Keyword, LLM, or Hybrid
Don't know which pattern fits? suggest_patterns() maps your question to the best patterns using keyword matching, LLM reasoning, or both:
from mycontext.intelligence import suggest_patterns
result = suggest_patterns(
    "Why did revenue drop? Timeline, root cause, and what to do next.",
    mode="hybrid",
    llm_provider="openai",
    suggest_chain=True,
)
print(result.suggested_chain)
# → ['temporal_sequence_analyzer', 'root_cause_analyzer', 'future_scenario_planner']
print(result.to_markdown())
8. Context Generator — Build a Full Context from Role + Goal
Provide a role and a goal. An LLM generates everything else: behavioral rules, communication style, expertise areas, reasoning strategy, few-shot examples, output schema, and guard rails. Returns a fully assembled Context ready to execute.
from mycontext.intelligence import generate_context
result = generate_context(
    role="Senior fraud analyst at a tier-1 investment bank",
    goal="Detect suspicious transaction patterns with a low false-positive rate",
    task="Analyze this batch of 50 transactions for fraud signals",
    provider="openai",
)
# Inspect what was generated
print(result.generation_meta) # rules, style, examples, output_schema, ...
# Assemble the structured prompt
print(result.assemble())
# Or execute directly
response = result.execute(provider="openai")
No manual prompt engineering. No template hunting. Describe your role and what you're optimizing for — the SDK does the rest.
9. Auto-Transform Any Question → Perfect Context
One call. No pattern selection needed. The Transformation Engine analyzes your input (type, complexity, domain, key concepts) and builds the right context automatically:
from mycontext.intelligence import transform
ctx = transform("Should we migrate to microservices? Compare tradeoffs.")
# Engine detects: comparison + decision → selects ComparativeAnalyzer
print(ctx.to_markdown())
10. Blueprint — Multi-Component Context Architecture
For production applications that need more than a single template. Blueprints orchestrate multiple components (guidance, knowledge, reasoning) with token budget management and strategy-based optimization (speed / quality / cost / balanced):
from mycontext.structure import Blueprint
from mycontext.foundation import Guidance
blueprint = Blueprint(
    name="research_assistant",
    guidance=Guidance(role="Expert research analyst"),
    directive_template="Research and explain: {topic}",
    token_budget=4000,
    optimization="balanced",
)
ctx = blueprint.build(topic="Quantum computing advances in 2025")
11. 13 Export Formats — True Vendor Neutrality
Build once, export everywhere. One context works with every LLM and framework:
ctx.to_openai() # OpenAI Chat API
ctx.to_anthropic() # Claude
ctx.to_google() # Gemini
ctx.to_langchain() # LangChain messages
ctx.to_llamaindex() # LlamaIndex
ctx.to_crewai() # CrewAI
ctx.to_autogen() # AutoGen
ctx.to_yaml() # Portable config
ctx.to_json() # JSON
ctx.to_xml() # XML
ctx.to_markdown() # Human-readable
ctx.to_messages() # Universal message list
ctx.to_dict() # Python dict
Plus dedicated integration helpers for LangChain, LlamaIndex, CrewAI, AutoGen, DSPy, Semantic Kernel, and Google ADK.
12. Agent Skills with Quality Gates
Define reusable skills as SKILL.md files. Fuse them with cognitive patterns. Gate execution on quality — if the generated context scores below threshold, it blocks before wasting an API call:
from pathlib import Path
from mycontext.skills import SkillRunner, improvement_report, suggested_edits

runner = SkillRunner()
result = runner.run(
    Path("skills/compare_options"),
    task="Compare microservices vs monolith",
    quality_threshold=0.7,
)
print(f"Quality: {result.quality_score.overall}, Gated: {result.gated}")
# Get concrete improvement suggestions for the skill
print(improvement_report(result))
for edit in suggested_edits(result):
    print(f"  → {edit}")
Skills can declare pattern: comparative_analyzer in frontmatter — the runner fuses the skill body with that cognitive pattern automatically.
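A minimal skill file might look like this (the pattern: frontmatter key comes from the paragraph above; the body text is an illustrative sketch):

---
pattern: comparative_analyzer
---
Compare the given options on cost, migration risk, and long-term maintainability.
Recommend exactly one option and justify the choice.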
13. Template Benchmarking with CAI
Automated test suites for cognitive templates. Load YAML test cases, run templates through questions, evaluate output quality, and compute CAI scores — in CI or from the CLI:
python -m mycontext.benchmark_cli run --template diagnostic_root_cause_analyzer
python -m mycontext.benchmark_cli run-all --output results.json
14. Generic Prompts — Zero-Cost Cognitive Scaffolding
Every template carries a hand-crafted GENERIC_PROMPT (~600-1200 chars) that distills its core methodology into a self-contained prompt. No LLM call needed — just string substitution.
Two modes per template: generic (fast, lightweight) or full (rich, structured):
from mycontext.templates.free.reasoning import RootCauseAnalyzer
# Generic mode — zero-cost prompt
rca = RootCauseAnalyzer()
prompt = rca.generic_prompt(problem="Server crashes during peak hours")
print(len(prompt)) # ~946 chars
# Or get any template's generic prompt with automatic fallback
from mycontext.intelligence import get_generic_prompt_for
prompt = get_generic_prompt_for("root_cause_analyzer", "Why did sales drop?")
15. Prompt Compilation Pipeline — Reusable Prompt Artifacts
Instead of executing templates and getting responses, compile them into reusable, provider-agnostic prompts that can be executed on any LLM:
from mycontext.intelligence import smart_prompt
composed = smart_prompt("Should we migrate to microservices?", provider="openai")
print(composed.to_string()) # reusable prompt artifact
response = composed.execute() # or execute directly
16. Static Generic Compilation — Maximum Cost Efficiency
Compile generic prompts from multiple templates with zero LLM calls for compilation. The only LLM call is the complexity assessment:
from mycontext.intelligence import smart_generic_prompt
# 1 LLM call (assessment) + 0 (compilation) → optimized prompt
composed = smart_generic_prompt(
    "Our team has communication breakdowns. Diagnose and solve.",
    provider="openai",
)
print(composed.source_templates) # e.g. ['root_cause_analyzer']
response = composed.execute() # execute with 1 more call
17. Three-Tier Execution Model
Choose your cost/quality tradeoff:
| Tier | Function | LLM Calls | Validated Quality |
|---|---|---|---|
| Static Generic | smart_generic_prompt() | 1 + 0 | ~95% avg |
| Dynamic Compiled | smart_prompt() | 1 + 1-3 | ~95% avg |
| Full Response | smart_execute() | 1 + 1-2 | ~96% avg |
All three tiers validated across 6 sprints of controlled experimentation (60+ experimental runs, 10 diverse questions, 5 evaluation dimensions).
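For reference, the three entry points side by side, with the same signatures used in the sections above:

from mycontext.intelligence import smart_generic_prompt, smart_prompt, smart_execute

q = "Why did churn spike 40%?"

static = smart_generic_prompt(q, provider="openai")    # Tier 1: zero-cost compilation
compiled = smart_prompt(q, provider="openai")          # Tier 2: LLM-refined prompt artifact
response, meta = smart_execute(q, provider="openai")   # Tier 3: full template execution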
18. Complexity Router — Automatic Template Selection
assess_complexity() classifies your question and decides the optimal approach before running anything:
from mycontext.intelligence import assess_complexity, smart_execute
# Automatic: routes to raw, single template, or integrated
response, meta = smart_execute("Why did churn spike 40%?", provider="openai")
print(meta['mode']) # 'single_template' or 'integrated'
print(meta['templates_used']) # ['root_cause_analyzer']
19. Built-In Retry & Timeout
All LLM calls include automatic retry with exponential backoff (rate limits, timeouts, server errors) and configurable timeout — production-ready out of the box.
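Conceptually, the retry loop behaves like this standalone sketch (not mycontext's actual implementation; the delays and exception types are placeholders):

import random
import time

def call_with_backoff(fn, max_retries=3, base_delay=1.0):
    """Retry fn() on transient errors with exponential backoff plus jitter."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except (TimeoutError, ConnectionError):  # stand-ins for rate-limit/server errors
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.random())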
20. Async Execution — True Non-Blocking LLM Calls
Every context can be executed asynchronously. Run multiple independent LLM calls concurrently with asyncio.gather() — no threads, no blocking, no event-loop stalls:
import asyncio
from mycontext import Context
from mycontext.foundation import Guidance, Directive
async def main():
    ctx_a = Context(guidance=Guidance(role="Risk analyst"), directive=Directive("Assess risks"))
    ctx_b = Context(guidance=Guidance(role="Strategist"), directive=Directive("Plan response"))
    ctx_c = Context(guidance=Guidance(role="Communicator"), directive=Directive("Draft message"))

    # All three LLM calls run concurrently — total latency = slowest single call
    results = await asyncio.gather(
        ctx_a.aexecute(provider="openai"),
        ctx_b.aexecute(provider="openai"),
        ctx_c.aexecute(provider="openai"),
    )
    for r in results:
        print(r.response)

asyncio.run(main())
Uses litellm.acompletion under the hood with full cache, retry, and tracing parity with the sync path.
21. Token-Budget Context Assembly
Assemble any context within a precise token budget for a specific model. Sections are included in priority order (directive → guidance → constraints → knowledge), and the first section that would overflow the budget is trimmed to fit — guaranteed ≤ max_tokens as measured by tiktoken:
# Assemble within a strict 8k token budget for the target model
prompt = ctx.assemble_for_model(model="gpt-4o-mini", max_tokens=8000)
# Full budget for Claude's 200k window
prompt = ctx.assemble_for_model(model="claude-3-5-sonnet", max_tokens=180000)
# No budget (default) — same as ctx.assemble()
prompt = ctx.assemble_for_model()
This replaces the previous character-based truncation: it prevents context overflow without artificially discarding sections that would still fit.
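For intuition, the budgeting strategy reduces to something like this sketch (the tiktoken calls are real; the section handling is illustrative, not the library's internals):

import tiktoken

def assemble_within_budget(sections, model="gpt-4o-mini", max_tokens=8000):
    # sections arrive in priority order: directive, guidance, constraints, knowledge
    enc = tiktoken.encoding_for_model(model)
    kept, used = [], 0
    for text in sections:
        tokens = enc.encode(text)
        if used + len(tokens) <= max_tokens:
            kept.append(text)
            used += len(tokens)
        else:
            kept.append(enc.decode(tokens[: max_tokens - used]))  # trim last section to fit
            break
    return "\n\n".join(kept)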
22. Validated Structured Output Parsing
All intelligence-layer LLM responses are now validated through Pydantic v2 schemas before use. When the optional instructor package is installed, the LLM is constrained to produce valid JSON via function-calling mode with automatic retry on validation failure (~98% parse success rate vs ~70% for regex alone):
pip install instructor # optional — enables structured LLM output
Without instructor, parsing falls back to Pydantic-validated JSON and then to the original regex parser; behavior is unchanged if the package is not installed.
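To illustrate the validation step, a minimal Pydantic v2 round-trip with a hypothetical schema (not the SDK's actual models):

from pydantic import BaseModel, ValidationError

class PatternSuggestion(BaseModel):  # hypothetical schema, for illustration only
    template_name: str
    confidence: float

raw = '{"template_name": "root_cause_analyzer", "confidence": 0.91}'
try:
    suggestion = PatternSuggestion.model_validate_json(raw)
except ValidationError:
    pass  # fall back to regex parsing, as described above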
At a Glance
| Capability | mycontext-ai | Typical prompt libraries |
|---|---|---|
| Cognitive patterns | 87 research-backed (16 free + 71 enterprise) | 10-20 generic templates |
| Context generator | Role + goal → full context via LLM | None |
| Structured prompt assembly | 9-section research-backed ordering | None |
| Thinking strategies | 5 named strategies (CoT, ToT, Self-Reflection, ...) | None |
| Few-shot calibration | Typed examples field, auto-positioned | Manual |
| Generic prompts (zero-cost) | 87 pre-authored, compilable | None |
| Prompt compilation pipeline | Static + dynamic + full (3 tiers) | None |
| Complexity router | Auto-selects optimal approach per question | None |
| Context quality scoring | 6 dimensions + issues + suggestions | None |
| Output quality scoring | 5 dimensions (separate from prompt quality) | None |
| Template effectiveness proof | CAI (quantitative lift measurement) | None |
| Pattern suggestion | Keyword + LLM + hybrid modes | Manual selection |
| Multi-template fusion | Intelligent merge (not concatenation) | None |
| Workflow chain generation | Auto-select + auto-parameterize | Manual |
| Export formats | 13 (OpenAI, Anthropic, LangChain, YAML, ...) | 1-2 |
| Framework integrations | 7 (LangChain, CrewAI, AutoGen, DSPy, ...) | 0-1 |
| Agent Skills + quality gate | Pattern-fused skills with threshold gating | None |
| Async execution | aexecute / agenerate — true non-blocking via litellm.acompletion | DIY |
| Token-budget assembly | assemble_for_model(model, max_tokens) — tiktoken-accurate | None |
| Validated structured output | Pydantic v2 schemas + optional instructor integration | None |
| Retry + timeout | Built-in exponential backoff | DIY |
| Research citations | 150+ peer-reviewed papers | 0-5 |
Quick Start
pip install mycontext-ai
# Add LLM execution (recommended)
pip install litellm
from mycontext import Context, Guidance, Directive, Constraints
# Manual — full control over every field
ctx = Context(
    guidance=Guidance(
        role="Senior security reviewer",
        goal="Find every exploitable vulnerability and give concrete fixes",
        rules=["Flag every injection risk", "Always include a severity rating"],
        style="concise, actionable",
    ),
    directive=Directive(content="Review this API for auth and input validation."),
    thinking_strategy="verify",  # answer, then self-critique
    examples=[
        {"input": "session.permanent = True", "output": "Medium — sessions never expire"},
    ],
    constraints=Constraints(
        must_include=["severity", "fix example"],
        output_schema=[{"name": "finding", "type": "str"}, {"name": "severity", "type": "str"}],
    ),
    research_flow=True,  # structured 9-section prompt
)
# Export to any LLM
ctx.to_openai() # → OpenAI messages
ctx.to_anthropic() # → Claude format
ctx.to_langchain() # → LangChain messages
# Or execute directly (requires litellm)
result = ctx.execute(provider="openai")
Or generate the whole thing from role + goal:
from mycontext.intelligence import generate_context
result = generate_context(
    role="Senior security reviewer",
    goal="Find every exploitable vulnerability and give concrete fixes",
    task="Review this authentication middleware",
    provider="openai",
)
print(result.assemble()) # fully populated 9-section prompt
All providers route through LiteLLM, giving you access to 100+ models. You can also register custom providers (e.g., Ollama for local models).
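Because a context exports to plain OpenAI-style messages, you can also call LiteLLM yourself, e.g. against a local Ollama model (the model string here is illustrative):

import litellm

# Reuse the context built above; to_openai() yields the message list
response = litellm.completion(
    model="ollama/llama3",
    messages=ctx.to_openai(),
)
print(response.choices[0].message.content)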
Use Cases
Chain patterns for complex analysis
Strategic questions need multiple reasoning steps. Chain patterns so each stage feeds the next:
# Enterprise templates require a license (see Enterprise Patterns section)
from mycontext.templates.enterprise.temporal import TemporalSequenceAnalyzer
from mycontext.templates.free.reasoning import RootCauseAnalyzer
from mycontext.templates.enterprise.synthesis import HolisticIntegrator
# Stage 1: Timeline
ctx1 = TemporalSequenceAnalyzer().build_context(
    events="Q1: Support tickets doubled. Q2: Competitor launched. Q3: Complaints up 40%.",
    time_span="12 months",
)

# Stage 2: Root cause (fed by Stage 1 output)
ctx2 = RootCauseAnalyzer().build_context(
    problem="Customer satisfaction collapse",
    symptoms=ctx1.directive.content[:2500],
)

# Stage 3: Synthesis
ctx3 = HolisticIntegrator().build_context(
    topic="Recovery strategy",
    perspectives=f"Timeline: {ctx1.directive.content[:600]}\nRCA: {ctx2.directive.content[:600]}",
)
result = ctx3.execute(provider="openai")
Drop into any orchestrator
mycontext contexts work as tools inside LangChain, CrewAI, smolagents, AutoGen, Semantic Kernel, and Google ADK:
from mycontext.intelligence import transform
from mycontext.integrations import LangChainHelper
ctx = transform("What are the top 3 risks for this launch?")
messages = LangChainHelper.to_messages(ctx)
# → Use in your LangChain chain or agent
Integration helpers are available for all 7 frameworks out of the box.
Enforce structured output
from mycontext.utils.structured_output import output_format
instruction = output_format("json", schema={"summary": "str", "risks": "list", "recommendation": "str"})
ctx = Context(directive=Directive(content=f"Analyze this proposal.\n\n{instruction}"))
87 Cognitive Patterns
Free Patterns (16)
Included in every install. Production-ready for analysis, decision-making, reasoning, and communication.
| Pattern | What it does |
|---|---|
| RootCauseAnalyzer | Five Whys + fishbone + systematic diagnosis |
| DataAnalyzer | Data description → analysis plan → insights |
| QuestionAnalyzer | Decompose complex questions into structured inquiry |
| StepByStepReasoner | Chain-of-thought with explicit reasoning steps |
| HypothesisGenerator | Generate and evaluate competing hypotheses |
| ScenarioPlanner | Future scenarios with probability assessment |
| RiskAssessor | Risk identification, scoring, and mitigation |
| Brainstormer | Structured ideation with divergent/convergent phases |
| CodeReviewer | Security, performance, and maintainability review |
| TechnicalTranslator | Translate technical content for different audiences |
| AudienceAdapter | Adapt message for specific audience and context |
| SocraticQuestioner | Guided inquiry through Socratic method |
| SynthesisBuilder | Integrate multiple sources into coherent synthesis |
| StakeholderMapper | Map stakeholders, interests, and influence |
| ConflictResolver | Mediate conflicts by identifying interests and common ground |
| IntentRecognizer | Identify core intent, goals, and motivations behind a statement |
Enterprise Patterns (+71)
Advanced patterns for temporal reasoning, diagnostics, systems thinking, ethical analysis, metacognition, learning science, and cross-domain synthesis. Enterprise patterns require a valid license key. Contact us to obtain a license.
The SDK will warn you when an enterprise template is accessed without a license and automatically suggest free alternatives:
import mycontext
# Without license — SDK warns and falls back to best free template
from mycontext.intelligence import smart_execute
response, meta = smart_execute("Analyze this...", include_enterprise=False)
# → UserWarning: Template 'causal_reasoner' requires an enterprise license.
# Free alternatives (16 templates) are available.
# With license — full access
mycontext.activate_license("MC-ENT-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX")
response, meta = smart_execute("Analyze this...", include_enterprise=True)
Categories: Metacognition · Ethical Reasoning · Systems Thinking · Learning & Knowledge Building · Evaluation & Assessment · Temporal Reasoning · Diagnostic & Troubleshooting · Synthesis & Integration · Advanced Decision · Advanced Problem Solving · Advanced Planning · Advanced Analysis · Advanced Reasoning · Advanced Creative · Advanced Communication · Advanced Specialized
Web Application
mycontext also ships as a full-featured web application — a context engineering workbench with a visual pattern library, chain builder, and an AI-powered Context Copilot that guides you through building and refining contexts step by step. The web app is available separately from the SDK.
Installation
# Core SDK (includes 16 free patterns, intelligence layer, quality metrics)
pip install mycontext-ai
# Add LLM execution support (recommended)
pip install mycontext-ai litellm
# Optional: provider-specific SDKs
pip install "mycontext-ai[openai]" # OpenAI SDK
pip install "mycontext-ai[anthropic]" # Anthropic SDK
pip install "mycontext-ai[google]" # Google GenAI SDK
pip install "mycontext-ai[all]" # All provider SDKs
What This Is (and Isn't)
mycontext-ai is a context engineering library. It structures and transforms your questions into high-quality prompts using research-backed cognitive patterns. It measures prompt quality and output quality. It proves templates work with quantitative metrics. It exports to any LLM format.
It is not a prompt template string library. It is not an LLM wrapper. It is not an agent framework. It works with your existing agent framework (LangChain, CrewAI, AutoGen, etc.) by giving it better inputs.
The core insight: the quality of an LLM's output is bounded by the quality of its input. mycontext engineers that input — and proves it.
License
MIT. Free edition includes 16 patterns and the full intelligence layer. Enterprise edition (+71 advanced patterns) requires a license key.
The quality of an LLM's output is bounded by the quality of its input.