
Standalone MCP server for LinkedIn post scheduling with a publisher daemon

Project description

PyPI · License: MIT · Python 3.10+

linkedin-mcp-scheduler

Schedule and manage LinkedIn posts through conversation with an AI agent. Posts publish reliably on time — locally or in a container.

User Story

You're chatting with your AI agent. You say:

"Schedule that post for tomorrow at 9am"

Done. Then later:

"Show me my queue"

You see everything — what's pending, what published, what failed. You say:

"Actually push that one to Thursday and change the second paragraph"

The agent edits it in-place and reschedules. No cancelling, no re-creating, no UUIDs to copy-paste.

"That failed post from yesterday — retry it now"

Retried. Published. You never opened LinkedIn.

This is what linkedin-mcp-scheduler provides: a standalone MCP server with a reliable publishing daemon, full CRUD on your post queue, and a conversational UX designed for AI agents to manage your LinkedIn presence.

Why a Separate Project

linkedin-mcp wraps the LinkedIn API as MCP tools — posting, engagement, auth. It has basic scheduling, but scheduling is a different problem:

  • It needs a daemon that runs continuously and publishes due posts. The API wrapper doesn't.
  • It needs persistent state (a database) with its own lifecycle. The API wrapper is stateless.
  • It needs to be containerizable — running in k8s with a volume-backed DB, HTTP transport, and env-based credentials. The API wrapper runs fine as a local stdio MCP.
  • The UX surface area is large enough to be its own product: queue management, editing, rescheduling, retries, media scheduling, recurring posts.

Architecture

┌──────────────────────────────────────┐
│  Container (or local)                │
│                                      │
│  MCP Server (HTTP/SSE)               │
│    - schedule, list, edit, cancel    │
│    - reschedule, retry, queue summary│
│     ↕ reads/writes                   │
│  SQLite DB  ← persistent volume      │
│     ↕ reads/writes                   │
│  Publisher Daemon (poll loop)        │
│    - publishes due posts via SDK     │
│    - retries failures                │
│                                      │
└──────────────────────────────────────┘

Both processes share one SQLite database. In a container, the DB lives on a persistent volume. Locally, it defaults to ~/.linkedin-mcp-scheduler/scheduled.db.
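To make the shared-database design concrete, here is a minimal sketch of one iteration of the publisher daemon's poll loop. The table name, column names, and status values are assumptions for illustration; the project's actual schema may differ, and the `publish` callable stands in for the LinkedIn SDK call.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical schema: the real project's table and column names may differ.
SCHEMA = """
CREATE TABLE IF NOT EXISTS scheduled_posts (
    id           INTEGER PRIMARY KEY,
    content      TEXT NOT NULL,
    scheduled_at TEXT NOT NULL,                     -- ISO-8601 UTC timestamp
    status       TEXT NOT NULL DEFAULT 'pending',   -- pending / published / failed
    attempts     INTEGER NOT NULL DEFAULT 0
);
"""

def publish_due_posts(conn: sqlite3.Connection, publish, now: datetime) -> int:
    """One poll-loop iteration: publish every pending post whose time has come."""
    due = conn.execute(
        "SELECT id, content FROM scheduled_posts "
        "WHERE status = 'pending' AND scheduled_at <= ?",
        (now.isoformat(),),
    ).fetchall()
    for post_id, content in due:
        try:
            publish(content)  # stand-in for the LinkedIn SDK call
            conn.execute(
                "UPDATE scheduled_posts SET status = 'published' WHERE id = ?",
                (post_id,),
            )
        except Exception:
            # Mark failed and count the attempt; retry_failed_post can requeue it.
            conn.execute(
                "UPDATE scheduled_posts SET status = 'failed', "
                "attempts = attempts + 1 WHERE id = ?",
                (post_id,),
            )
    conn.commit()
    return len(due)
```

Because SQLite serializes writes, the MCP server and the daemon can share one database file without extra coordination, which is the property the architecture above relies on.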

Deployment Modes

  • Local: uv run linkedin-mcp-scheduler — stdio MCP for Claude Code, daemon as a background process or launchd job
  • Container: docker-compose with HTTP transport, volume-mounted DB, env-var credentials — ready to lift into k8s
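The container mode could look roughly like the compose file below. This is a hypothetical sketch: the image name, command names, and environment variables (`LINKEDIN_ACCESS_TOKEN`, `SCHEDULER_DB_PATH`) are illustrative, not the project's documented interface.

```yaml
# Hypothetical compose file: service layout only, names are illustrative.
services:
  mcp-server:
    image: linkedin-mcp-scheduler:latest
    command: linkedin-mcp-scheduler --transport http --port 8080
    ports:
      - "8080:8080"
    environment:
      LINKEDIN_ACCESS_TOKEN: ${LINKEDIN_ACCESS_TOKEN}
      SCHEDULER_DB_PATH: /data/scheduled.db
    volumes:
      - scheduler-data:/data

  publisher-daemon:
    image: linkedin-mcp-scheduler:latest
    command: linkedin-mcp-scheduler-daemon
    environment:
      LINKEDIN_ACCESS_TOKEN: ${LINKEDIN_ACCESS_TOKEN}
      SCHEDULER_DB_PATH: /data/scheduled.db
    volumes:
      - scheduler-data:/data

volumes:
  scheduler-data:
```

The key point is the shared named volume: both containers mount the same SQLite file, mirroring the local setup where server and daemon read the same path.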

MCP Tools

Core (from linkedin-mcp, to be extracted)

Tool                   Description
schedule_post          Schedule a text post (+ optional URL) for future publication
list_scheduled_posts   List scheduled posts, filterable by status
get_scheduled_post     Get details of a specific scheduled post
cancel_scheduled_post  Cancel a pending scheduled post

Planned

Tool                         Description
update_scheduled_post        Edit content, URL, or visibility of a pending post in-place
reschedule_post              Change the scheduled time of a pending post
retry_failed_post            Retry a failed post, optionally at a new time
queue_summary                Formatted overview: counts by status, next due, recent failures
schedule_post_with_image     Schedule a post with an image attachment
schedule_post_with_document  Schedule a post with a document attachment
schedule_post_with_poll      Schedule a post with a poll
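As an illustration of the planned queue_summary tool, here is a hedged sketch of the kind of formatted overview it might return. The post-record fields (`status`, `scheduled_at`, `content`) and the output format are assumptions, not the tool's actual contract.

```python
from collections import Counter

def queue_summary(posts: list[dict]) -> str:
    """Hypothetical: counts by status, next due pending post, failure count."""
    counts = Counter(p["status"] for p in posts)
    lines = [
        "Queue: " + ", ".join(f"{n} {status}" for status, n in sorted(counts.items()))
    ]
    # Next due: the pending post with the earliest scheduled time.
    pending = sorted(
        (p for p in posts if p["status"] == "pending"),
        key=lambda p: p["scheduled_at"],
    )
    if pending:
        lines.append(
            f"Next due: {pending[0]['scheduled_at']} ({pending[0]['content'][:40]})"
        )
    failed = counts.get("failed", 0)
    if failed:
        lines.append(f"Failures: {failed} post(s) awaiting retry")
    return "\n".join(lines)
```

Returning a pre-formatted string (rather than raw rows) suits the conversational UX described above: the agent can relay the summary to the user verbatim.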

Status

Early stage — establishing scope, setting up the project, and creating issues for the build-out.

Download files

Download the file for your platform.

Source Distribution

linkedin_mcp_scheduler_ldraney-0.1.1.tar.gz (19.8 kB)

Built Distribution


linkedin_mcp_scheduler_ldraney-0.1.1-py3-none-any.whl (13.2 kB)

File details

Details for the file linkedin_mcp_scheduler_ldraney-0.1.1.tar.gz.

File metadata

File hashes

Hashes for linkedin_mcp_scheduler_ldraney-0.1.1.tar.gz
Algorithm Hash digest
SHA256 0d1b29932c8550a82a6f9fceb287df509138998171ea0aee0bec4a4436d69b27
MD5 20288f5f63776cae5d46bf031ba33633
BLAKE2b-256 f560fa68d5e241b68c2d03d28969d1483951eb83b45d14e3be072808ba5b14bb


Provenance

The following attestation bundles were made for linkedin_mcp_scheduler_ldraney-0.1.1.tar.gz:

Publisher: publish.yml on ldraney/linkedin-mcp-scheduler


File details

Details for the file linkedin_mcp_scheduler_ldraney-0.1.1-py3-none-any.whl.

File metadata

File hashes

Hashes for linkedin_mcp_scheduler_ldraney-0.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 3690df1cd191ccff95be04d7a6ce0144207ed6fe975b7eb4264da677e8d856be
MD5 03aa6521eab5c1447bd340177f7e1436
BLAKE2b-256 58b083cfc70967f514da90f3cc57ffe4c99e15e5cd714871b056402f302c5c5c


Provenance

The following attestation bundles were made for linkedin_mcp_scheduler_ldraney-0.1.1-py3-none-any.whl:

Publisher: publish.yml on ldraney/linkedin-mcp-scheduler

