
LM Studio-powered llms.txt generator based on DSPy tutorials.

title: "LM Studio llms.txt Generator" description: "Generate llms.txt, llms-full, and fallback artifacts for GitHub repositories using DSPy with LM Studio."

Overview

Use this CLI-first toolkit to produce LLM-friendly documentation bundles (llms.txt, llms-full.txt, optional llms-ctx.txt, and fallback JSON) for any GitHub repository. The generator wraps DSPy analyzers, manages the LM Studio model lifecycle through the official Python SDK, and guarantees output even when the primary language model cannot respond.

[!NOTE] The pipeline validates curated links, detects default branches automatically, and writes artifacts to artifacts/<owner>/<repo>/.

Prerequisites

  • Python 3.10 or later
  • A local LM Studio server, started either from the Developer tab (Start Server) or via the CLI (lms server start --port 1234)
  • GitHub API token in GITHUB_ACCESS_TOKEN or GH_TOKEN
  • Optional: the llms_txt package when you want to produce llms-ctx.txt
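
Export the token before running the CLI; either variable name from the list above is read by the generator (the value shown is a placeholder):

export GITHUB_ACCESS_TOKEN=ghp_your_token_here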

[!WARNING] Install dependencies inside a virtual environment to avoid PEP 668 “externally managed environment” errors.

Install

Create a virtual environment

python3 -m venv .venv
source .venv/bin/activate

Install the package with developer extras

pip install -e '.[dev]'

Installing the editable package exposes the lmstxt CLI and the lmstxt-mcp server.
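
A quick way to confirm both console scripts are available in the active environment:

# both commands should resolve inside .venv/bin
which lmstxt lmstxt-mcp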

[!TIP] Keep the virtual environment active while running the CLI or tests so the SDK-based unload logic can import lmstudio.

Configure LM Studio

Install the CLI and start the server

npx lmstudio install-cli
lms server start --port 1234

The server must expose an OpenAI-compatible endpoint, commonly http://localhost:1234/v1.
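
To confirm the endpoint is reachable before running the generator, query the standard OpenAI-compatible model listing route (assuming the default port used above):

curl http://localhost:1234/v1/models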

Ensure the target model is downloaded

Open LM Studio, download the model (for example qwen/qwen3-4b-2507), and confirm it appears in the Server tab.
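
If you prefer the terminal, the lms CLI from the previous step can confirm the download (a quick check; output format varies by LM Studio version):

# list locally downloaded models; qwen/qwen3-4b-2507 should appear
lms ls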

Quick start

Run the CLI against any GitHub repository:

lmstxt https://github.com/owner/repo \
  --model qwen/qwen3-4b-2507 \
  --api-base http://localhost:1234/v1 \
  --stamp

The command writes artifacts to artifacts/owner/repo/. Use --output-dir to override the destination.
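
For example, to direct the bundle somewhere other than the default artifacts/ tree (the destination path here is only illustrative):

lmstxt https://github.com/owner/repo \
  --model qwen/qwen3-4b-2507 \
  --api-base http://localhost:1234/v1 \
  --output-dir ./build/llms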

Environment variables

  • LMSTUDIO_MODEL – Default LM Studio model identifier
  • LMSTUDIO_BASE_URL – Base URL such as http://localhost:1234/v1
  • LMSTUDIO_API_KEY – API key for secured LM Studio deployments
  • OUTPUT_DIR – Custom root directory for artifacts
  • ENABLE_CTX=1 – Emit llms-ctx.txt using the optional llms_txt package
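
Setting these variables lets you omit the per-run flags; a minimal sketch, assuming the CLI falls back to these defaults when the corresponding options are not passed (values are illustrative):

export LMSTUDIO_MODEL=qwen/qwen3-4b-2507
export LMSTUDIO_BASE_URL=http://localhost:1234/v1
export OUTPUT_DIR=./artifacts
export ENABLE_CTX=1
lmstxt https://github.com/owner/repo --stamp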

Generated artifacts

  • *-llms.txt – Primary documentation synthesized by DSPy or the fallback heuristic
  • *-llms-full.txt – Expanded content fetched from curated GitHub links with 404 filtering
  • *-llms.json – Fallback JSON following LLMS_JSON_SCHEMA (only when LM fallback triggers)
  • *-llms-ctx.txt – Optional context file created when ENABLE_CTX=1 and llms_txt is installed

[!IMPORTANT] The pipeline always writes llms.txt and llms-full.txt, even when the language model call fails.
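
After a run against https://github.com/owner/repo, the output directory therefore looks roughly like the listing below (a hypothetical example; the ctx and JSON files only appear when their conditions above are met):

ls artifacts/owner/repo/
# repo-llms.txt
# repo-llms-full.txt
# repo-llms-ctx.txt   (ENABLE_CTX=1 and llms_txt installed)
# repo-llms.json      (LM fallback triggered)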

Model Context Protocol (MCP) Server

This package includes a FastMCP server that exposes the generator as an MCP tool and provides access to generated artifacts as resources.

Features

  • Asynchronous Processing: Tool calls return a run_id immediately while generation happens in the background.
  • Tools:
    • lmstxt_generate_llms_txt: Trigger llms.txt generation.
    • lmstxt_generate_llms_full: Generate llms-full.txt from an existing run.
    • lmstxt_generate_llms_ctx: Generate llms-ctx.txt (requires llms_txt).
    • lmstxt_list_runs: View recent generation history and status.
    • lmstxt_read_artifact: Read generated files with pagination support.
    • lmstxt_list_all_artifacts: List all persistent .txt artifacts on disk.
  • Resources:
    • Run-specific: lmstxt://runs/{run_id}/{artifact_name}
    • Persistent Directory: lmstxt://artifacts/{filename} (e.g., lmstxt://artifacts/owner/repo/repo-llms.txt)

Running the Server

# Default stdio transport
lmstxt-mcp

Client Configuration

Add to your MCP client config (e.g., claude_desktop_config.json or config.toml):

Claude Desktop / Cursor

{
  "mcpServers": {
    "lmstxt": {
      "command": "lmstxt-mcp",
      "env": {
        "GITHUB_ACCESS_TOKEN": "your_token",
        "LMSTUDIO_BASE_URL": "http://localhost:1234/v1"
      }
    }
  }
}

Codex / CLI (toml)

[mcp_servers.lmstxt]
command = "lmstxt-mcp"
startup_timeout_sec = 30
tool_timeout_sec = 30

[mcp_servers.lmstxt.env]
GITHUB_ACCESS_TOKEN = "your_token"
LMSTUDIO_BASE_URL = "http://localhost:1234/v1"

How it works

  1. Collect repository material – the GitHub client gathers the file tree, README, package files, repository visibility, and default branch.
  2. Prepare LM Studio – the manager confirms the requested model is loaded, auto-loading if necessary.
  3. Generate documentation – DSPy produces curated content; on LM failures the fallback serializer builds markdown and JSON directly.
  4. Assemble llms-full – curated links are re-fetched via raw GitHub URLs for public repos or authenticated API calls for private ones, with validation to remove dead links.
  5. Unload models safely – the workflow first uses the official lmstudio SDK (model.unload() or list_loaded_models), then falls back to HTTP and CLI unload requests.
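
Step 5 normally runs automatically, but if an interrupted run leaves a model resident you can trigger the same CLI fallback by hand (assuming the lms CLI from the setup section; flags can differ between LM Studio releases):

# show currently loaded models, then unload everything
lms ps
lms unload --all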

Project layout

  • src/lms_llmsTxt/ – core generation library, DSPy analyzers, and LM Studio helpers.
  • src/lms_llmsTxt_mcp/ – MCP server implementation, asynchronous worker, and resource providers.
  • tests/ – pytest coverage for the generator pipeline and MCP server.
  • artifacts/ – sample outputs from previous runs.

Verify your setup

source .venv/bin/activate
python -m pytest

All tests should pass, confirming URL validation, fallback handling, and MCP resource exposure.

Troubleshooting

[!WARNING] If pip install -e '.[dev]' fails with build tool errors, ensure cmake and the necessary compilers are installed.

[!TIP] If the MCP server times out during generation, check lmstxt_list_runs to see if the background task is still processing. The lmstxt_generate_* tools return immediately to avoid client timeouts.

MCP Inspector

npx @modelcontextprotocol/inspector --config ./inspector.config.json --server lmstxt

Use the payloads in docs/mcp-inspector-payloads.md to verify specific tool behaviors.
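
If your Inspector version also supports CLI mode, tools can be exercised non-interactively; a hedged sketch (the --cli, --method, and --tool-name flags belong to the Inspector, not to this package, and may vary by version):

# list the exposed tools, then trigger a run-history lookup
npx @modelcontextprotocol/inspector --cli lmstxt-mcp --method tools/list
npx @modelcontextprotocol/inspector --cli lmstxt-mcp --method tools/call --tool-name lmstxt_list_runs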

