# crewai-ejentum
A CrewAI tool that retrieves a task-matched cognitive operation from the Ejentum Reasoning Harness and injects it into the agent's reasoning before it produces an answer.
Each operation in the Ejentum library (679 of them, organized across four harnesses) is engineered in two layers:
- a natural-language procedure the model can read, naming the steps to take and the failure pattern to refuse, and
- an executable reasoning topology: a graph-shaped plan over those steps. The plan names
  - explicit decision points where the model branches,
  - parallel branches that run and rejoin,
  - bounded loops that run until convergence,
  - named meta-cognitive moments where the model is asked to stop, look at its own working, and re-enter at a specific step, and
  - escape paths for when the prescribed plan stops fitting the task at hand.
The natural-language layer tells the model what to do. The topology layer pins down how those steps connect: where to decide, where to loop, where to stop and look at itself. Together they act as a persistent attention anchor that survives long context windows and multi-turn execution chains, which is precisely where a model's own reasoning template typically decays.
## Installation

```bash
pip install crewai-ejentum
```
## Configuration

Get a free Ejentum API key (100 calls, no card required) at https://ejentum.com/pricing and set it in your environment:

```bash
export EJENTUM_API_KEY="zpka_..."
```
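If the key is absent the tool returns an error string at call time rather than raising; if you prefer to fail fast before the crew starts, a minimal sketch (assuming nothing beyond the environment variable and the `zpka_` prefix shown in the example above) looks like:

```python
import os

def require_ejentum_key() -> str:
    """Fail fast before building the crew if the API key is not set.

    The 'zpka_' prefix check mirrors the example key above; it is a
    sanity hint only, not an official validation rule.
    """
    key = os.environ.get("EJENTUM_API_KEY", "")
    if not key:
        raise RuntimeError(
            "EJENTUM_API_KEY is not set; export it before running the crew."
        )
    if not key.startswith("zpka_"):
        print("warning: EJENTUM_API_KEY does not look like a zpka_ key")
    return key
```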
## Usage

```python
from crewai import Agent, Task, Crew
from crewai_ejentum import EjentumHarnessTool

harness = EjentumHarnessTool()

architect = Agent(
    role="Senior architect",
    goal="Evaluate technical decisions honestly",
    backstory="You are pragmatic and push back on sunk-cost framings.",
    tools=[harness],
)

task = Task(
    description=(
        "We've spent three months on the GraphQL gateway. It's mostly done. "
        "Should we keep going or pivot to REST? "
        "Call the Ejentum harness with mode='anti-deception' before answering."
    ),
    agent=architect,
    expected_output="A recommendation that separates past spending from prospective evaluation.",
)

Crew(agents=[architect], tasks=[task]).kickoff()
```
## The four harnesses

Pick the mode that matches what the agent is about to do:

| Mode | Best for | Library size |
|---|---|---|
| `reasoning` | Analytical, diagnostic, planning, and multi-step tasks spanning abstraction, time, causality, simulation, spatial reasoning, and metacognition | 311 operations |
| `code` | Code generation, refactoring, review, and debugging across the software-engineering layer | 128 operations |
| `anti-deception` | Prompts that pressure the agent to validate, certify, or soften an honest assessment, spanning sycophancy, hallucination, deception, adversarial framing, judgment, and executive control | 139 operations |
| `memory` | Sharpening an observation already formed about cross-turn drift across the perception layer; filter-oriented, not write-oriented | 101 operations |
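If one agent handles mixed work, a thin dispatch table can pick the mode up front. The task-kind keywords below are illustrative assumptions; only the four mode names come from the table above:

```python
# Illustrative mapping from coarse task kinds to harness modes.
# The keys are made up for this sketch; the values are the real modes.
MODE_FOR_TASK = {
    "analyze": "reasoning",
    "plan": "reasoning",
    "generate-code": "code",
    "debug": "code",
    "certify": "anti-deception",  # pressure to validate or soften
    "recall": "memory",
}

def pick_mode(task_kind: str) -> str:
    """Return the harness mode for a coarse task kind, defaulting to 'reasoning'."""
    return MODE_FOR_TASK.get(task_kind, "reasoning")
```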
## What an injection looks like

A real `reasoning`-mode response to the query *"investigate why our nightly ETL job has started failing intermittently over the past two weeks; nothing in the code or schema has changed"*:

```text
[NEGATIVE GATE]
The server's response time was accepted as average, despite a suspicious
rhythm break in its timing pattern.

[PROCEDURE]
Step 1: Establish baseline timing profiles by extracting historical
durations and intervals for each event type. Step 2: Compare each observed
timing against its baseline and compute deviation magnitude. Step 3:
Classify anomalies as too fast, too slow, too early, or too late, and rank
by severity. ... Step 5: If deviation exceeds two standard deviations,
probe root cause by tracing upstream dependencies. ...

[REASONING TOPOLOGY]
S1:durations → FIXED_POINT[baselines] → N{dismiss_timing_deviations_
without_investigation} → for_each: S2:compare → S3:deviation →
G1{>2sigma?} --yes→ S4:classify → S5:probe_cause → FLAG → continue --no→
S6:validate → continue → all_checked → OUT:anomaly_report

[TARGET PATTERN]
Establish timing baselines by extracting historical response intervals.
Compare current server response time to this baseline. ...

[FALSIFICATION TEST]
If no event timing is flagged as suspiciously fast or slow relative to
baseline, temporal anomaly detection was not active.

Amplify: timing baseline comparison; anomaly classification; security
context elevation
Suppress: average timing acceptance; outlier normalization
```
The agent reads both the natural-language [PROCEDURE] and the graph-logic [REASONING TOPOLOGY] before generating its user-facing answer. The bracketed labels are instructions to the agent, not content to display; the user sees a naturally phrased answer shaped by the injection.
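For logging or debugging a crew, the bracketed headers make the scaffold easy to split into sections. The parser below is a sketch based only on the sample format shown above; it is not part of the shipped tool:

```python
import re

def split_scaffold(scaffold: str) -> dict[str, str]:
    """Split an injection into its bracketed sections, e.g.
    [PROCEDURE] and [REASONING TOPOLOGY], keyed by section name.

    Assumes section headers sit alone on a line, as in the sample.
    """
    sections: dict[str, str] = {}
    current = None
    for line in scaffold.splitlines():
        m = re.fullmatch(r"\[([A-Z ]+)\]", line.strip())
        if m:
            current = m.group(1)
            sections[current] = ""
        elif current is not None:
            sections[current] += line + "\n"
    return {k: v.strip() for k, v in sections.items()}
```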
## API reference

```python
EjentumHarnessTool(api_url: str = "...", timeout_seconds: float = 10.0)
```

| Field | Default | Description |
|---|---|---|
| `api_url` | `https://ejentum-main-ab125c3.zuplo.app/logicv1/` | Override only if you self-host the Ejentum Logic API gateway. |
| `timeout_seconds` | `10.0` | Per-call HTTP timeout. |

`EJENTUM_API_KEY` is read from the environment at call time.
The tool's `_run` accepts two arguments:

- `query` (string, required): a 1-2 sentence description of the task the agent is about to work on. For `mode='memory'`, format as `"I noticed [X]. This might mean [Y]. Sharpen: [Z]."`
- `mode` (string, required): one of `reasoning`, `code`, `anti-deception`, `memory`.

Returns the scaffold string. Errors are returned as human-readable strings (the tool never raises, so the agent never crashes the run).
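For `mode='memory'`, a small formatter keeps queries in the documented shape. The helper name and parameters are illustrative; only the sentence template comes from the argument description above:

```python
def memory_query(noticed: str, might_mean: str, sharpen: str) -> str:
    """Build a mode='memory' query in the documented shape:
    "I noticed [X]. This might mean [Y]. Sharpen: [Z]."
    """
    return f"I noticed {noticed}. This might mean {might_mean}. Sharpen: {sharpen}."
```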
## Compatibility

- Python 3.10+
- `crewai>=0.40.0`
- `requests>=2.31.0`
## Resources
- Ejentum homepage: https://ejentum.com
- Free tier and pricing: https://ejentum.com/pricing
- API reference: https://ejentum.com/docs/api_reference
- "Why LLM Agents Fail" essay: https://ejentum.com/blog/why-llm-agents-fail
- "Under Pressure" research paper: https://doi.org/10.5281/zenodo.19392715
- CrewAI documentation: https://docs.crewai.com