solveit_dmtools

Personalized tools to add functionality to SolveIt

Usage and examples
from dialoghelper.core import *
from solveit_dmtools import * # core, dhb, dhp, fab
await add_msg(content=dhb.doc, msg_type='note')
'_bbf1dbe3'
Backup Chat for SolveIt using dialoghelper and lisette
Sometimes we may hit a problem in SolveIt while Sonnet is down (E300), or maybe we want a different perspective.
This module lets us leverage any other LLM available via LiteLLM by providing our own keys and the model name.
Usage:
from solveit_dmtools import dhb
# then in another cell
# bc = dhb.c() to search model names
bc = dhb.c("model-name")
# then in another cell
bc("Hi")
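The model-lookup step can be sketched as a simple substring search. Note that `SAMPLE_MODELS` and `search_models` below are hypothetical illustrations, not `solveit_dmtools` APIs; the real helper searches LiteLLM's full model catalog.

```python
# A minimal sketch of the search dhb.c() performs when given a partial name.
# SAMPLE_MODELS is a tiny hardcoded sample purely for illustration.
SAMPLE_MODELS = [
    "azure/codex-mini", "gpt-5-codex", "codex-mini-latest",
    "openrouter/openai/gpt-5-codex", "anthropic/claude-sonnet",
]

def search_models(query, models=SAMPLE_MODELS):
    "Return model names containing the query substring, sorted."
    return sorted(m for m in models if query in m)

print(search_models("codex"))
```

Searching on a fragment like "codex" is enough to surface every provider-prefixed variant of a model family at once.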
First, calling with no model will prompt you to type in part of a model name to search:
bc = dhb.c()
Please try again by using e.g. `bc = dhb.c('model_name')` with a model name e.g. pick from these found by searching for 'codex':
azure/codex-mini
azure/eu/gpt-5.1-codex
azure/eu/gpt-5.1-codex-mini
azure/global/gpt-5.1-codex
azure/global/gpt-5.1-codex-mini
azure/gpt-5.1-codex-2025-11-13
azure/gpt-5.1-codex-mini-2025-11-13
azure/gpt-5-codex
azure/gpt-5.1-codex
azure/gpt-5.1-codex-max
azure/gpt-5.1-codex-mini
azure/gpt-5.2-codex
azure/us/gpt-5.1-codex
azure/us/gpt-5.1-codex-mini
codex-mini-latest
github_copilot/gpt-5.1-codex-max
github_copilot/gpt-5.3-codex
chatgpt/gpt-5.2-codex
chatgpt/gpt-5.1-codex-max
chatgpt/gpt-5.1-codex-mini
gpt-5-codex
gpt-5.1-codex
gpt-5.1-codex-max
gpt-5.1-codex-mini
gpt-5.2-codex
openrouter/openai/gpt-5-codex
openrouter/openai/gpt-5.2-codex
### The following ones are listed by OpenRouter but not LiteLLM (may still work)
bc = dhb.c("openrouter/openai/gpt-5-codex")
# bc = dhb.c("openrouter/moonshotai/kimi-k2.5")
The following will be automatically commented out when run, then a prompt cell is added after it with input/output from the other LLM.
bc("Hi, can you use tools?")
I can request web content through the read_url tool if we decide it’s
needed, but I’d check with you first since those calls can be pricey.
What did you have in mind to explore?
- id:
gen-1771774291-jp1DHcbRVP0X3fAEttth - model:
openai/gpt-5-codex - finish_reason:
stop - usage:
Usage(completion_tokens=201, prompt_tokens=2229, total_tokens=2430, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=128, rejected_prediction_tokens=None, text_tokens=None, image_tokens=0), prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cost=0.00479625, is_byok=False, cost_details={'upstream_inference_cost': 0.00479625, 'upstream_inference_prompt_cost': 0.00278625, 'upstream_inference_completions_cost': 0.00201})
Prompt (openrouter/openai/gpt-5-codex): Hi, can you use tools?
def bad_joke():
    "Tells a bad joke"
    return "Why are engineers bad at telling jokes timing?"
bc.add_tools('bad_joke')
len(bc.tool_schemas)
2
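How a plain function with a docstring becomes a tool schema can be sketched as follows. `to_tool_schema` is a hypothetical illustration, not lisette's actual converter; the real one also derives typed parameters from the signature.

```python
import inspect

def bad_joke():
    "Tells a bad joke"
    return "Why are engineers bad at telling jokes timing?"

def to_tool_schema(fn):
    "Build an OpenAI-style function schema from a plain Python function (sketch)."
    params = {name: {"type": "string"} for name in inspect.signature(fn).parameters}
    return {"type": "function",
            "function": {"name": fn.__name__,
                         "description": inspect.getdoc(fn) or "",
                         "parameters": {"type": "object",
                                        "properties": params,
                                        "required": list(params)}}}

schema = to_tool_schema(bad_joke)
print(schema["function"]["name"], "-", schema["function"]["description"])
```

The docstring is what the model sees, which is why even a one-line description like "Tells a bad joke" matters.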
The following will be automatically commented out when run, then a prompt cell is added after it with input/output from the other LLM.
bc("Tell me a bad joke using your tools please")
Here’s what the bad_joke tool came up with: “Why are engineers bad at
telling jokes timing?”
- id:
gen-1771774296-gt42Tm4e9H4qfwlkjkUW - model:
openai/gpt-5-codex - finish_reason:
stop - usage:
Usage(completion_tokens=122, prompt_tokens=2889, total_tokens=3011, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=64, rejected_prediction_tokens=None, text_tokens=None, image_tokens=0), prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cost=0.00483125, is_byok=False, cost_details={'upstream_inference_cost': 0.00483125, 'upstream_inference_prompt_cost': 0.00361125, 'upstream_inference_completions_cost': 0.00122})
Prompt (openrouter/openai/gpt-5-codex): Tell me a bad joke using your tools please
bc("Can you please read https://raw.githubusercontent.com/AnswerDotAI/fhdaisy/refs/heads/main/README.md and give the elevator pitch and some sample code? No need to store the raw content.")
Reading that README means fetching content from the URL, which can be a
bit costly. Should I go ahead and use the read_url tool to grab it?
- id:
gen-1771774298-F37yjrFaAZNZMEXj6e0R - model:
openai/gpt-5-codex - finish_reason:
stop - usage:
Usage(completion_tokens=126, prompt_tokens=3412, total_tokens=3538, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=64, rejected_prediction_tokens=None, text_tokens=None, image_tokens=0), prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cost=0.005525, is_byok=False, cost_details={'upstream_inference_cost': 0.005525, 'upstream_inference_prompt_cost': 0.004265, 'upstream_inference_completions_cost': 0.00126})
Prompt (openrouter/openai/gpt-5-codex): Can you please read https://raw.githubusercontent.com/AnswerDotAI/fhdaisy/refs/heads/main/README.md and give the elevator pitch and some sample code? No need to store the raw content.
The confirmation request above is built-in behavior… some models will not bother to ask each time, though.
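The ask-before-you-fetch pattern can be sketched as a wrapper. `confirm_gated` and `read_url_stub` are hypothetical illustrations; in practice the model itself decides to ask, guided by its instructions, rather than any Python wrapper.

```python
def confirm_gated(fn, asker=input):
    "Wrap a potentially costly tool so it asks for confirmation before running (sketch)."
    def wrapper(*args, **kwargs):
        if asker(f"Run {fn.__name__}? [y/N] ").strip().lower() != "y":
            return "Skipped by user."
        return fn(*args, **kwargs)
    return wrapper

def read_url_stub(url):
    "Stand-in for a real (and potentially pricey) URL fetch."
    return f"contents of {url}"

# asker is injectable so the gate can be exercised without interactive input
gated = confirm_gated(read_url_stub, asker=lambda q: "y")
print(gated("https://example.com"))
```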
bc("It's OK, go ahead")
Elevator pitch:
fhdaisy gives FastHTML developers a Python-native way to use DaisyUI.
Instead of hand-writing HTML with a pile of Tailwind/DaisyUI class
names, you drop in components like Btn, Card, or Alert. Each knows
the right HTML tag, auto-includes the base DaisyUI class, and lets you
add modifiers with terse shorthands (e.g., cls='-primary' expands to
btn btn-primary). The result is a clean, maintainable FastHTML
codebase that still takes full advantage of DaisyUI’s themes, responsive
design, and accessibility.
Tiny taste of the API (run inside your FastHTML app/notebook):
from fhdaisy import Btn
Btn('Hey there', cls='-primary')
Would you like to explore how to build out a more complex component—maybe a card or modal—using the same pattern?
- id:
gen-1771774307-dvqERU4z2W8XtQEYmAGe - model:
openai/gpt-5-codex - finish_reason:
stop - usage:
Usage(completion_tokens=726, prompt_tokens=5190, total_tokens=5916, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=512, rejected_prediction_tokens=None, text_tokens=None, image_tokens=0), prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=3968, text_tokens=None, image_tokens=None), cost=0.0092835, is_byok=False, cost_details={'upstream_inference_cost': 0.0092835, 'upstream_inference_prompt_cost': 0.0020235, 'upstream_inference_completions_cost': 0.00726})
await add_msg(content=dhp.doc, msg_type="note")
'_ee75feab'
Dialog Helper for Polya’s Problem-Solving Method
This module provides quick access to Polya’s four-stage problem-solving process through interactive prompts.
Each stage has multiple questions/prompts:
- To preview one, just print it / type its name in a cell and hit Submit - e.g. dhp.act.next shows you “(prompt) What is next?”
- To execute one, call it by adding () after the name - e.g. dhp.act.next() will replace the current message cell with a prompt cell having “What is next?” in it
- It will be automatically executed; you can hit Esc to stop it and/or Enter to edit the prompt
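The preview-vs-execute split can be sketched as an object whose repr previews and whose call executes. `PolyaPrompt` is a hypothetical illustration of the pattern; the real dhp objects drive SolveIt cells rather than returning strings.

```python
class PolyaPrompt:
    "Sketch of dhp's preview/execute pattern (assumed design, not the real class)."
    def __init__(self, text, kind="prompt"):
        self.text, self.kind = text, kind
    def __repr__(self):
        # Previewing: just typing the name in a cell shows this
        return f"({self.kind}) {self.text}"
    def __call__(self):
        # Executing: the real object replaces the current cell with a prompt
        # cell; here we just return the text to stand in for that side effect.
        return self.text

nxt = PolyaPrompt("What is next?")
print(repr(nxt))
print(nxt())
```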
TYPICAL FLOW:
- Start with dhp.u (understand), even briefly, to clarify your understanding of the problem
- Move to dhp.p (plan) to develop initial strategies
- Switch between plan and act (or a, or x/execute) as you develop and test approaches
- Use dhp.r (review) to gain deeper understanding of your approach and findings
- You might loop back to other steps after r (review)

If you feel stuck, run dhp.help() and it will submit the prompt cell it creates - SolveIt will help you pick a next prompt!
UNDERSTAND STAGE - Clarify the problem before solving
- dhp.u.summary() - Creates a prompt cell asking SolveIt to give a concise summary of the problem
- dhp.u.info() - Creates a prompt cell asking SolveIt to inventory known/unknown information
- dhp.u.similar() - Creates a prompt cell asking SolveIt if it has seen a similar problem before
- dhp.u.lateral() - Creates a prompt cell to explore problem relationships and scope
- dhp.u.related() - Creates a prompt cell to identify similar or simpler problems
- dhp.u.viz() - Creates a prompt cell asking SolveIt to create a figure or diagram to represent the problem
- dhp.u.notation() - Creates a prompt cell asking SolveIt to pick suitable notation (symbols for quantities/data, states, transitions)
- dhp.u.simplest() - Creates a prompt cell asking SolveIt for the simplest way to look at the problem
- dhp.u.simplify() - Creates a prompt cell asking SolveIt to separate problem parts (break down complex conditions into simpler ones)

PLAN STAGE - Develop strategies and approaches
- dhp.p.chunks() - Creates a prompt cell asking SolveIt to break down the problem into smaller sub-problems
- dhp.p.partial() - Creates a prompt cell asking SolveIt if there’s a smaller part or representation of the problem to solve
- dhp.p.known_approach() - Creates a prompt cell asking SolveIt to use a known algorithm or library to solve the problem
- dhp.p.verifiable() - Creates a prompt cell asking SolveIt how to verify if the solution is consistent and correct
- dhp.p.backward() - Creates a prompt cell asking SolveIt to work backward from the desired result
- dhp.p.aux() - Creates a prompt cell asking SolveIt to use an auxiliary element (variable, diagram, or example) to clarify the path
- dhp.p.analogy() - Creates a prompt cell asking SolveIt to use analogy or similarity to relate the problem to a known solution
- dhp.p.review() - Creates a prompt cell asking SolveIt to critique the plan of attack (be frank and critical)

ACT STAGE - Execute your plan while monitoring progress
- dhp.a.all() - Creates a prompt cell asking SolveIt if we covered all of the data or examples for this step
- dhp.a.check() - Creates a prompt cell asking SolveIt if this step seems correct
- dhp.a.doubt() - Creates a prompt cell asking SolveIt if we’re using the right approach
- dhp.a.next() - Creates a prompt cell asking SolveIt what is next
- dhp.a.other() - Creates a prompt cell asking SolveIt if there’s another way to look at this
- dhp.a.partial() - Creates a prompt cell asking SolveIt about intermediate results or milestones to aim for
- dhp.a.simpler() - Creates a prompt cell asking SolveIt if there was a simpler way to do this step
- dhp.a.symmetry() - Creates a prompt cell asking SolveIt about symmetries or patterns in the problem to exploit
- dhp.a.valid() - Creates a prompt cell asking SolveIt if this step was a valid step

REVIEW STAGE - Verify results, reflect on process, and extract lessons
- dhp.r.all() - Creates a prompt cell asking SolveIt if we covered all of the data or examples for this problem
- dhp.r.alter() - Creates a prompt cell asking SolveIt for alternative solutions or approaches that might be more efficient or effective
- dhp.r.general() - Creates a prompt cell asking SolveIt if we can generalize the solution to other similar problems
- dhp.r.grok() - Creates a note cell with the text “To consider: Can I understand the solution without having to perform all the steps?”
- dhp.r.learned() - Creates a prompt cell asking SolveIt what lessons have been learned from this
- dhp.r.mistakes() - Creates a prompt cell asking SolveIt about common mistakes made
- dhp.r.other() - Creates a prompt cell asking SolveIt if we can derive the result differently
- dhp.r.principles() - Creates a prompt cell asking SolveIt to identify underlying principles or patterns that emerged during the solution process
- dhp.r.sanity() - Creates a prompt cell asking SolveIt if the result makes sense and can be verified by substitution or another method
- dhp.r.simpler() - Creates a prompt cell asking SolveIt if we can derive the result in a simpler way
- dhp.r.test() - Creates a prompt cell asking SolveIt for different ways to test this
The following is the output of dhp.help():
Please pick an appropriate next-step/prompt from the below:
doc: UNDERSTAND STAGE - Clarify the problem before solving
- dhp.u.summary() - Creates a prompt cell asking SolveIt to give a concise summary of the problem
- dhp.u.info() - Creates a prompt cell asking SolveIt to inventory known/unknown information
- dhp.u.similar() - Creates a prompt cell asking SolveIt if it has seen a similar problem before
- dhp.u.lateral() - Creates a prompt cell to explore problem relationships and scope
- dhp.u.related() - Creates a prompt cell to identify similar or simpler problems
- dhp.u.viz() - Creates a prompt cell asking SolveIt to create a figure or diagram to represent the problem
- dhp.u.notation() - Creates a prompt cell asking SolveIt to pick suitable notation (symbols for quantities/data, states, transitions)
- dhp.u.simplest() - Creates a prompt cell asking SolveIt for the simplest way to look at the problem
- dhp.u.simplify() - Creates a prompt cell asking SolveIt to separate problem parts (break down complex conditions into simpler ones)
- info: (prompt) “What information do we have? What information do we not have? What might change as we learn more?”
- lateral: (prompt) “Can you relate this problem to a more general or a more specific problem?”
- notation: (prompt) “Can we pick a suitable notation (e.g. symbols for quantities/data, states, and transitions)?”
- related: (prompt) “Can you think of a related problem that we can solve? It could even be a simpler one we could solve first to help understand this one.”
- similar: (prompt) “Have you seen a similar problem before?”
- simplest: (prompt) “What might be the simplest way to look at this problem?”
- simplify: (prompt) “Can we separate the various parts of the problem (e.g. break down complex conditions into simpler ones)?”
- summary: (prompt) “Could you please give a concise summary of the problem?”
- viz: (prompt) “Can we create a figure or diagram to represent the problem?”
doc: PLAN STAGE - Develop strategies and approaches
- dhp.p.chunks() - Creates a prompt cell asking SolveIt to break down the problem into smaller sub-problems
- dhp.p.partial() - Creates a prompt cell asking SolveIt if there’s a smaller part or representation of the problem to solve
- dhp.p.known_approach() - Creates a prompt cell asking SolveIt to use a known algorithm or library to solve the problem
- dhp.p.verifiable() - Creates a prompt cell asking SolveIt how to verify if the solution is consistent and correct
- dhp.p.backward() - Creates a prompt cell asking SolveIt to work backward from the desired result
- dhp.p.aux() - Creates a prompt cell asking SolveIt to use an auxiliary element (variable, diagram, or example) to clarify the path
- dhp.p.analogy() - Creates a prompt cell asking SolveIt to use analogy or similarity to relate the problem to a known solution
- dhp.p.review() - Creates a prompt cell asking SolveIt to critique the plan of attack (be frank and critical)
- analogy: (prompt) “Can you use analogy or similarity to relate the problem to a known solution?”
- aux: (prompt) “Could we use an auxiliary element (e.g., a variable, diagram, or example) to clarify the path?”
- backward: (prompt) “Can we work backward from the desired result?”
- chunks: (prompt) “Could you please break down the problem into smaller sub-problems?”
- known_approach: (prompt) “Could we use a known algorithm or library to solve the problem, or some of it?”
- partial: (prompt) “Is there a smaller part or representation of the problem we could solve?”
- review: (prompt) “Could you please critique the plan of attack? Be frank, do not be afraid to be critical.”
- verifiable: (prompt) “How would we verify if our solution is consistent and correct?”
doc: ACT STAGE - Execute your plan while monitoring progress
- dhp.a.doubt() - Creates a prompt cell asking SolveIt if we’re using the right approach
- dhp.a.other() - Creates a prompt cell asking SolveIt if there’s another way to look at this
- dhp.a.partial() - Creates a prompt cell asking SolveIt about intermediate results or milestones to aim for
- dhp.a.symmetry() - Creates a prompt cell asking SolveIt about symmetries or patterns in the problem to exploit
- dhp.a.next() - Creates a prompt cell asking SolveIt what is next
- dhp.a.valid() - Creates a prompt cell asking SolveIt if this step was a valid step
- dhp.a.check() - Creates a prompt cell asking SolveIt if this step seems correct
- dhp.a.simpler() - Creates a prompt cell asking SolveIt if there was a simpler way to do this step
- dhp.a.all() - Creates a prompt cell asking SolveIt if we covered all of the data or examples for this step
- all: (prompt) “Did we cover all of the data or examples for this step?”
- check: (prompt) “Does this step seem correct?”
- doubt: (prompt) “Are we using the right approach?”
- next: (prompt) “What is next?”
- other: (prompt) “Is there another way to look at this?”
- partial: (prompt) “Are there any intermediate results or milestones that we can aim for?”
- simpler: (prompt) “Was there a simpler way we could have done this step?”
- symmetry: (prompt) “Are there any symmetries or patterns in the problem that we can exploit?”
- valid: (prompt) “Does this step seem to have been a valid step?”
doc: REVIEW STAGE - Verify results, reflect on process, and extract lessons
- dhp.r.all() - Creates a prompt cell asking SolveIt if we covered all of the data or examples for this problem
- dhp.r.sanity() - Creates a prompt cell asking SolveIt if the result makes sense and can be verified by substitution or another method
- dhp.r.grok() - Creates a note cell with the text “To consider: Can I understand the solution without having to perform all the steps?”
- dhp.r.learned() - Creates a prompt cell asking SolveIt what lessons have been learned from this
- dhp.r.general() - Creates a prompt cell asking SolveIt if we can generalize the solution to other similar problems
- dhp.r.alter() - Creates a prompt cell asking SolveIt for alternative solutions or approaches that might be more efficient or effective
- dhp.r.other() - Creates a prompt cell asking SolveIt if we can derive the result differently
- dhp.r.mistakes() - Creates a prompt cell asking SolveIt about common mistakes made
- dhp.r.simpler() - Creates a prompt cell asking SolveIt if we can derive the result in a simpler way
- dhp.r.principles() - Creates a prompt cell asking SolveIt to identify underlying principles or patterns that emerged during the solution process
- dhp.r.test() - Creates a prompt cell asking SolveIt for different ways to test this
- all: (prompt) “Did we cover all of the data or examples for this problem?”
- alter: (prompt) “Can you think of alternative solutions or approaches that might be more efficient or effective?”
- general: (prompt) “Can we generalize the solution to other similar problems?”
- grok: (note) “To consider: Can I understand the solution without having to perform all the steps?”
- learned: (prompt) “What lessons have I learned from this?”
- mistakes: (prompt) “What were my common mistakes?”
- other: (prompt) “Can we derive the result differently?”
- principles: (prompt) “Can you identify any underlying principles or patterns that emerged during the solution process?”
- sanity: (prompt) “Does the result make sense? Can we verify by substitution or another method?”
- simpler: (prompt) “Can we derive the result in a simpler way?”
- test: (prompt) “What are some different ways we can test this?”
# Uncomment and submit the line below if you do not already have a copy of fabric in your fabric folder
#!git clone --depth 1 https://github.com/danielmiessler/fabric.git
await add_msg(content=fab.doc, msg_type="note")
'_3719b4eb'
fab - Open Source ‘fabric’ prompts made quickly available in SolveIt
This module leverages the 200+ open source LLM prompts available in Daniel Miessler’s ‘fabric’ project.
If you import it as fab, submit the following to see an overview of all the prompts: fab.p
HOW TO USE IT
Most Common Syntax: prompt="Your Prompt" in one cell, then fab.p.pattern_name() in another, where pattern_name is any of the 200+ available fabric patterns.
MOST IMPORTANT AND USED OPTIONS AND FEATURES
- Variable Targeting: Use fab.p.pattern_name('variable_name') to process content from a specific variable instead of the default ‘prompt’ variable.
- Pattern Discovery: Use fab.p.help() (an alias for suggest_pattern()) to get suggestions of which pattern to pick for your prompt.
- Compression Feature: Use fab.compress() after running a pattern to save tokens by marking the previous cell as skipped and compressing the output to a new note.
- Default Variable: Most patterns work with a variable called ‘prompt’ by default, making it easy to process your main content.
COMMON PATTERNS
- For Summarizing Content: fab.p.summarize()
- For Explaining Code: fab.p.explain_code()
- For Analyzing Claims: fab.p.analyze_claims()
- For Extracting Wisdom from Text: fab.p.extract_wisdom()
- For Creating Quizzes: fab.p.create_quiz()
fab.p.explain_code
Fabric pattern: explain_code - Analyze/explain code, security tool outputs, and configs.
prompt = read_url("https://raw.githubusercontent.com/shuane/blogtopod/refs/heads/main/blogtopod.py")
The next cell is running fab.p.explain_code()
(From fab.p.explain_code folded below) # IDENTITY and PURPOSE
You are an expert coder that takes code and documentation as input and do your best to explain it.
Take a deep breath and think step by step about how to best accomplish this goal using the following steps. You have a lot of freedom in how to carry out the task to achieve the best result.
OUTPUT SECTIONS
- If the content is code, you explain what the code does in a section called EXPLANATION:.
- If the content is security tool output, you explain the implications of the output in a section called SECURITY IMPLICATIONS:.
- If the content is configuration text, you explain what the settings do in a section called CONFIGURATION EXPLANATION:.
- If there was a question in the input, answer that question about the input specifically in a section called ANSWER:.
OUTPUT
- Do not output warnings or notes—just the requested sections.
INPUT:
INPUT:
$prompt
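The `$prompt` placeholder at the end of the pattern suggests how variable targeting works: the chosen variable's content is substituted into the pattern template before it is sent to the model. This is a sketch of that assumed mechanism, not fab's actual code; `string.Template` is one straightforward way to do it.

```python
from string import Template

# Fabric patterns end with a "$prompt" placeholder; a call like
# fab.p.explain_code() presumably fills it with the target variable's value.
pattern = "You are an expert coder...\n\nINPUT:\n$prompt"
source = "def f(): pass"

# safe_substitute leaves any unknown $placeholders intact instead of raising
filled = Template(pattern).safe_substitute(prompt=source)
print(filled.endswith(source))
```

Passing a different variable name to a pattern, as in fab.p.pattern_name('variable_name'), would simply change which value gets substituted here.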
The next cell is running fab.compress()
Prompt (From fab.p.explain_code) for $prompt
Let me fetch the code first so I can explain it.
read_url(url=‘<…>…w.githubusercontent.com/shuane…</…>’, as_md=False)
{
"id": "toolu_01BgmX6VMTLe4aaW1qX9UycV",
"call": {
"function": "read_url",
"arguments": {
"url": "https://raw.githubusercontent.com/shuane/blogtopod/refs/heads/main/blogtopod.py",
"as_md": "False"
}
},
"result": "<TRUNCATED>\u2026ipt\\n# requires-python = \">=3.12,<=3.13\"\\n# dependencies = [\\n# \"marimo\",\\n# \"gaspare==0.3.3\",\\n# \"openai==1.78.1\",\\n# \"pydub==0.25.1\",\\n# \"contextkit==0.0.6\",\\n# ]\\n# ///\\n\\nimport marimo\\n\\n__generated_with = \"0.11.31\"\\napp = marimo.App(width=\"medium\")\\n\\n\\n@app.cell\\ndef _():\\n import marimo as mo\\n import os\\n from openai import OpenAI\\n import gaspare as gp\\n import unicodedata\\n import re\\n from pathlib import Path\\n from contextkit.read import read_url\\n from io import BytesIO\\n from pydub import AudioSegment\\n return (\\n AudioSegment,\\n BytesIO,\\n OpenAI,\\n Path,\\n gp,\\n mo,\\n os,\\n re,\\n read_url,\\n unicodedata,\\n )\\n\\n\\n@app.cell\\ndef _(mo):\\n mo.md(\\n r\"\"\"\\n **Note**: This script assumes you have your OPENAI_API_KEY and GEMINI_API_KEY environment variables set\\n\\n - Gemini is used for making the script\\n \u2026</TRUNCATED>"
}
EXPLANATION:
blogtopod.py is a Marimo reactive notebook app that converts a blog post (or any web page) into a two-host podcast audio file. Here’s how it works, stage by stage:
1. Setup & Dependencies The script uses marimo for its reactive
UI, openai for text-to-speech, gaspare (a lightweight Gemini
wrapper) for script generation, pydub for audio manipulation, and
contextkit for fetching web content.
2. UI Inputs A Marimo cell creates interactive widgets: a URL to
read, an output filename, two host names and their OpenAI TTS voices, an
approximate length slider (3–60 min), and a “Run!” button. Everything is
gated behind that button via mo.stop(not run_button.value).
3. Fetch the blog post When you click Run, it uses
contextkit.read.read_url
to grab the page content from the provided URL.
4. Build a podcast script prompt The fetched content is inserted
into a detailed LLM prompt that asks for a two-host conversational
script with distinct host personalities (one enthusiastic/curious, one
analytical/thoughtful), natural transitions, and a specific
speaker-label format (ALEX:, JAMIE:).
5. Generate the script with Gemini It sends that prompt to
gemini-2.5-pro-preview-03-25 via gaspare.Chat and parses the
response into segments using parse_podcast_script, which splits on
SPEAKER: labels.
6. Text cleanup clean_text_for_tts fixes broken Unicode (curly
quotes, ellipses, zero-width spaces), strips non-ASCII, and normalizes
the text so the TTS engine doesn’t stumble.
7. Generate audio with OpenAI TTS Each segment is sent to OpenAI’s
gpt-4o-mini-tts model, with the appropriate voice for each host. Audio
chunks are streamed into BytesIO buffers and converted to
pydub.AudioSegment objects.
8. Combine & export All segments are concatenated with
sum(parts, AudioSegment.empty()), previewed in the notebook, and
exported as an MP3. The script text is also saved as a .md file
alongside it.
In short: paste a URL, click Run, and you get a podcast MP3 with two AI hosts discussing the article — Gemini writes the script, OpenAI voices it.
To come full circle, we can have a different LLM check and correct the results…
bc("Can you compare your answer to https://raw.githubusercontent.com/shuane/blogtopod/refs/heads/main/README.md and adjust if needed? It's OK, it's not a big download, so go ahead and read it.")
I’ve cross-checked the earlier explanation against the project’s README. The overall description of the workflow still holds; however, there are a few implementation details worth weaving in so the summary lines up with the published docs:

- How to run it: the README recommends uvx marimo run --sandbox https://raw.githubusercontent.com/shuane/blogtopod/refs/heads/main/blogtopod.py, with a note that saying “no” to the Docker prompt keeps it in the current folder. There’s also a traditional pip install -r requirements.txt + marimo edit blogtopod.py path.
- Environment setup: both OPENAI_API_KEY and GEMINI_API_KEY must be defined before running; the README walks through grabbing keys from OpenAI and Google AI Studio and exporting them.
- Runtime expectations: the script may take 6–10 minutes to finish and will produce both your_file.mp3 and a matching your_file_script.md.
- Version constraints: pydub currently requires Python earlier than 3.13, so stick to the 3.12 line the README calls out.
If we fold those details into the earlier summary, it becomes fully consistent with the README. Would you like me to rewrite the combined summary now, or is there a particular section (installation, environment setup, runtime behavior) you want to focus on first?
- id:
gen-1771774452-DzZSXggvfnJe6LWrYXHf - model:
openai/gpt-5-codex - finish_reason:
stop - usage:
Usage(completion_tokens=556, prompt_tokens=16775, total_tokens=17331, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=256, rejected_prediction_tokens=None, text_tokens=None, image_tokens=0), prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=15744, text_tokens=None, image_tokens=None), cost=0.00881675, is_byok=False, cost_details={'upstream_inference_cost': 0.00881675, 'upstream_inference_prompt_cost': 0.00325675, 'upstream_inference_completions_cost': 0.00556})
Prompt (openrouter/openai/gpt-5-codex): Can you compare your answer to https://raw.githubusercontent.com/shuane/blogtopod/refs/heads/main/README.md and adjust if needed? It’s OK, it’s not a big download, so go ahead and read it.
Developer Guide
If you are new to using nbdev, here are some useful pointers to get you started.
Installing solveit_dmtools in Development mode
- local dev install:
$ pip install -e .
- make changes under nbs/ directory: …
- prepare for commit:
$ alias nbprep='nbdev-clean ; nbdev-export'
$ nbprep
#!pip install -e ..
# OR
#!pip install -Uqq solveit_dmtools
Installation
Install latest from the GitHub repository:
$ pip install git+https://github.com/shuane/solveit_dmtools.git
or from pypi
$ pip install solveit_dmtools
Documentation
Documentation can be found hosted on this GitHub repository’s pages. Additionally you can find package manager specific guidelines on the pypi site.