llmoji
Making agents cuter
[!WARNING] Privacy notice for upgraders. Versions of llmoji before 1.2.0 had a privacy issue; I have changed the upload method to mitigate it. Please upgrade (pip install --upgrade llmoji) before uploading.
Llmoji is a small CLI that makes your agents cuter. (´-ω-`)
Llmoji configures your agent to start each message with a kaomoji, saves those kaomoji locally, and provides tools to summarize the aggregated meaning of each face and upload it to a shared database.
The companion research repo llmoji-study is where this data is processed.
There are three main commands:
- llmoji install <provider>: writes hooks to prompt for and record kaomoji
- llmoji analyze: scrape and aggregate your logs
- llmoji upload --target {hf,email}: ship the bundle (HF: pushes a per-submission branch on the dataset for the maintainer to review; email: tarball)
analyze needs an LLM to synthesize your logs. By default it uses Anthropic Haiku and reads $ANTHROPIC_API_KEY; --backend openai uses GPT-5.4 mini and reads $OPENAI_API_KEY; --backend local runs against any OpenAI-compatible endpoint (Ollama, vLLM, etc.) and needs --base-url and --model. upload --target hf needs your HuggingFace token plus an upload password posted on the dataset card; see SECURITY.md for the threat model. The email path tarballs the bundle and has you attach it manually.
Reporting issues
If you notice any errors while using the program, please update to the most recent version and reinstall the hooks. If the error persists, please open an issue. This project is a work in progress and I am actively finding and fixing bugs.
Purpose
The shared HuggingFace dataset at a9lim/llmoji collects kaomoji counts and a single summarized description per face per source model, across many users' coding agents. The companion repo processes those descriptions. After you run analyze, you can inspect the files yourself under ~/.llmoji/bundle/ before you choose to upload.
Quick start
pip install llmoji
llmoji install # autodetect: install for every detected harness
# or, target a single harness explicitly:
llmoji install claude_code # or: codex, hermes, opencode, openclaw
From now on, your agent will use kaomoji at the start of each message.
After letting it run for a week or so:
export ANTHROPIC_API_KEY=...
llmoji status # check what's been logged
llmoji analyze # scrape + canonicalize + summarize
llmoji upload --target hf # pushes to a submission branch on a9lim/llmoji
# or:
llmoji upload --target email # opens mailto:
You can pick a different backend for analyze:
export OPENAI_API_KEY=...
llmoji analyze --backend openai # GPT-5.4 mini via the Responses API
# or:
# any OpenAI-compatible endpoint:
llmoji analyze --backend local \
    --base-url http://localhost:11434/v1 \
    --model llama3.1
analyze caches per-instance descriptions at ~/.llmoji/cache/per_instance.jsonl keyed by content hash plus the synthesis model id, backend, and base URL. llmoji cache clear wipes it.
Install
pip install llmoji
This requires Python 3.11+. The runtime dependency footprint is four packages: anthropic, openai, huggingface_hub, and ruamel.yaml. Hooks run in bash and need jq.
From source:
git clone https://github.com/a9lim/llmoji
cd llmoji
pip install -e ".[dev]" # adds pytest + ruff
How it works
Journal capture
Llmoji first registers a UserPromptSubmit hook that injects a reminder on every turn, asking the model to begin its reply with a kaomoji. It then registers a Stop hook that fires once per assistant turn: it extracts the reply, strips the kaomoji from the body, and appends one JSONL row to ~/.<harness>/kaomoji-journal.jsonl. The schema is the same across every provider:
{"ts": "...", "model": "...", "cwd": "...", "kaomoji": "(◕‿◕)", "user_text": "...", "assistant_text": "..."}
Analysis
llmoji analyze scrapes every installed provider's journal plus any extra JSONL files under ~/.llmoji/journals/. For each entry a source model wrote, the chosen synthesizer model describes that specific instance. Then, it aggregates the descriptions for each unique kaomoji per model and writes an overall meaning. This summarized output is the only thing that ships in the bundle.
The synthesizer is one of three backends, chosen via --backend. The same synthesizer evaluates everything in a single analyze run, so the descriptions across source models are comparable.
| Backend | API | Default model |
|---|---|---|
| anthropic | Anthropic SDK, messages.create | claude-haiku-4-5-20251001 |
| openai | OpenAI SDK, Responses API | gpt-5.4-mini-2026-03-17 |
| local | OpenAI-compatible Chat Completions endpoint | (set via --model) |
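The aggregation step can be sketched roughly as follows: group per-instance descriptions by (source model, kaomoji), then hand each group to the synthesizer for one overall meaning. The data and names here are illustrative, not llmoji's internals:

```python
from collections import defaultdict

# Per-instance descriptions, as the synthesizer might have produced them:
# (source_model, kaomoji, per_instance_description)
rows = [
    ("claude-sonnet-4-6", "(◕‿◕)", "cheerful greeting"),
    ("claude-sonnet-4-6", "(◕‿◕)", "friendly acknowledgement"),
    ("gpt-5.5", "(´-ω-`)", "mild resignation"),
]

# Aggregate by unique kaomoji per source model...
grouped: dict[tuple[str, str], list[str]] = defaultdict(list)
for model, kao, desc in rows:
    grouped[(model, kao)].append(desc)

# ...then each group's description list would be synthesized into one
# overall meaning; only that summary ships in the bundle.
for (model, kao), descs in grouped.items():
    print(f"{model} {kao}: {len(descs)} instance(s)")
```

Because one synthesizer handles every group in a single analyze run, the summaries for different source models are produced under the same lens and stay comparable.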
Bundle structure
analyze writes to ~/.llmoji/bundle/:
~/.llmoji/bundle/
manifest.json
claude-sonnet-4-6.jsonl
claude-opus-4-7.jsonl
gpt-5.5.jsonl
- manifest.json: package version, the synthesis backend and model id used, a salted submitter id, generation timestamp, list of providers seen, per-source-model row counts, total synthesized rows, and anything you include as --notes.
- <source-model>.jsonl: one row per kaomoji as that model used it, with the synthesized meaning. The filename stem is the model id.
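For a sense of shape, a row in a <source-model>.jsonl file might look like the following. The field names here are illustrative; inspect your own files under ~/.llmoji/bundle/ for the actual schema:

```json
{"kaomoji": "(◕‿◕)", "count": 42, "meaning": "warm, upbeat acknowledgement at the start of a reply"}
```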
Privacy
| Tier | Where | Shipped on upload? |
|---|---|---|
| Raw user and assistant text | ~/.<harness>/kaomoji-journal.jsonl | Never |
| Per-instance synthesizer paraphrase | ~/.llmoji/cache/per_instance.jsonl | Never |
| Synthesized summaries and counts per model | ~/.llmoji/bundle/ | Yes |
Please see SECURITY.md for the full privacy model.
Providers
llmoji install <provider> writes the hook or plugin file and registers it with the harness.
Bash hook providers
| Provider | Hook events | Settings format | Notes |
|---|---|---|---|
| claude_code | Stop, UserPromptSubmit | JSON | Stable, in daily use. |
| codex | Stop, UserPromptSubmit | JSON | Stable, in daily use. |
| hermes | post_llm_call, pre_llm_call | YAML | Subagent traffic is not currently filtered (no child id on the upstream payload). |
TS plugin providers
| Provider | Plugin location | Settings format | Notes |
|---|---|---|---|
| opencode | ~/.config/opencode/plugins/llmoji.ts | (none) | Auto-loaded by opencode; file presence is the registration. |
| openclaw | ~/.openclaw/plugins/llmoji-kaomoji/ | JSON | install also flips plugins.entries.llmoji-kaomoji.hooks.allowConversationAccess in config.json. |
install does not clobber existing config. llmoji uninstall <provider> removes the hooks (or plugin files) and the settings entry. Journals and the per-instance cache are preserved; wipe those with llmoji cache clear.
Static dumps
To pull kaomoji out of a static export:
llmoji parse --provider claude.ai ~/Downloads/data-...-batch-0000
llmoji parse --provider chatgpt ~/Downloads/chatgpt-export
llmoji parse --provider gemini ~/Downloads/aistudio-exports
llmoji parse --provider openhands ~/.openhands/conversations
| Source | Shape walked | Output journal |
|---|---|---|
| claude.ai | conversations.json | claude_ai_export.jsonl |
| chatgpt | conversations.json | chatgpt_export.jsonl |
| gemini | MyActivity.json | gemini_aistudio_export.jsonl |
| openhands | <conversation>/events/event-NNNNN-<id>.json | openhands_export.jsonl |
For Claude Code, Codex, or Hermes history that predates installing the live hook, the historical transcripts can be replayed into the journals via llmoji import <provider>.
Custom harness
For harnesses we don't ship a first-class adapter for:
- Append one row per kaomoji-bearing assistant turn to ~/.llmoji/journals/<harness>.jsonl.
- Use the canonical six-field schema: {ts, model, cwd, kaomoji, user_text, assistant_text}.
- Strip the leading kaomoji from assistant_text on the way in (the prefix lives in the kaomoji field).
- Validate the prefix the same way the package does: llmoji.taxonomy.is_kaomoji_candidate(prefix).
llmoji analyze picks up everything under ~/.llmoji/journals/ automatically.
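The steps above can be sketched as a small adapter. Only llmoji.taxonomy.is_kaomoji_candidate is real; the stand-in validator, helper name, and splitting heuristic below are illustrative assumptions:

```python
import json
import re
from datetime import datetime, timezone
from pathlib import Path

def looks_like_kaomoji(prefix: str) -> bool:
    """Stand-in validator for this sketch; a real adapter should call
    llmoji.taxonomy.is_kaomoji_candidate(prefix) instead."""
    return bool(re.match(r"^\(.+\)$", prefix))

def append_row(journal: Path, model: str, cwd: str,
               user_text: str, assistant_text: str) -> dict:
    # Naive split: treat the first whitespace-delimited token as the prefix.
    prefix, _, body = assistant_text.partition(" ")
    if not looks_like_kaomoji(prefix):
        raise ValueError("assistant turn does not start with a kaomoji")
    row = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "cwd": cwd,
        "kaomoji": prefix,        # the prefix lives in the kaomoji field...
        "user_text": user_text,
        "assistant_text": body,   # ...and is stripped from the body
    }
    with journal.open("a", encoding="utf-8") as f:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")
    return row
```

A row appended this way to ~/.llmoji/journals/<harness>.jsonl matches the six-field schema and is picked up on the next analyze run.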
The Python module llmoji.taxonomy is the canonical source for the validator. If you're porting the validator to another language, please mirror the rules in is_kaomoji_candidate; the canonical TS port lives at llmoji/_plugins/_kaomoji_taxonomy.ts.partial. Any change to these rules is a cross-corpus invariant change on the package side, and ports must be updated to match.
Tests
pytest tests/ # everything
pytest tests/test_canonicalize.py # rule-by-rule regression for canonicalize_kaomoji and extract
pytest tests/test_public_surface.py # locks the cross-corpus invariant contract
The full suite runs anywhere. CI runs ruff check . and pytest on every PR.
The public-surface test exercises taxonomy invariants, synth-prompt content checks, the synthesizer factory dispatch, provider rendering plus bash -n validation of every hook template, the bundle allowlist, the corrupt-config refusal paths, and the unified mask_kaomoji prepend contract. The canonicalize tests run rule-by-rule.
Prior art
Llmoji replicates and expands eriskii's Claude-faces catalog, the original post proposing the idea of prompting for and tracking Claude's kaomoji use. The shared HuggingFace dataset extends that pipeline across many users, many harnesses, and many model releases.
Contributing and security
Please see CONTRIBUTING.md for dev setup. For security and privacy, please see SECURITY.md.
License
GPL-3.0-or-later. See LICENSE. The companion research repo llmoji-study is CC-BY-SA-4.0. The shared corpus on HuggingFace is also CC-BY-SA-4.0; running llmoji upload --target hf contributes a bundle under those terms.
If you use llmoji or the central corpus in published research, please cite this repository.