Pynions
Simple marketing automation framework
A lean open-source Python framework for building AI-powered automation workflows that run on your machine. Built for marketers who want to automate research, monitoring, and content tasks without cloud dependencies or complex setups.
Think of it as Zapier/n8n but for local machines, designed specifically for marketing workflows.
What is Pynions?
Pynions helps marketers automate:
- Content research and analysis
- SERP monitoring and tracking
- Content extraction and processing
- AI-powered content generation
- Marketing workflow automation
Key Features
- Start small, ship fast
- Easy API connections to your existing tools
- AI-first but not AI-only
- Zero bloat, minimal dependencies
- Built for real marketing workflows
- Quick to prototype and iterate
- Local-first, no cloud dependencies
Technology Stack
- Python for all code
- Pytest for testing
- LiteLLM for unified LLM access
- Jina AI for content extraction
- Serper for SERP analysis
- Playwright for web automation
- dotenv for configuration
- httpx for HTTP requests
Quick Start
# Create and activate virtual environment
python3 -m venv venv
source venv/bin/activate
# Install Pynions
pip install .
# The installer will automatically:
# 1. Create .env from .env.example
# 2. Create pynions.json from pynions.example.json
# Add your API keys to .env
nano .env
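To confirm the install, run a quick smoke test from the activated virtual environment (the success message is just illustrative):

python3 -c "import pynions; print('Pynions imported OK')"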
Example Workflow
import asyncio
from pynions.core import Workflow, WorkflowStep
from pynions.core import save_result  # assumed import path for the save_result helper
from pynions.core.config import load_config
from pynions.plugins import SerperWebSearch, JinaAIReader

async def main():
    # Load configuration (automatically reads from root .env and pynions.json)
    config = load_config()

    # Initialize plugins
    serper = SerperWebSearch()  # automatically uses SERPER_API_KEY from .env
    jina = JinaAIReader()  # automatically uses JINA_API_KEY from .env

    # Create workflow
    workflow = Workflow(
        name="content_research",
        description="Research and analyze content"
    )

    # Add steps
    workflow.add_step(WorkflowStep(
        plugin=serper,
        name="search",
        description="Search for relevant content"
    ))
    workflow.add_step(WorkflowStep(
        plugin=jina,
        name="extract",
        description="Extract clean content"
    ))

    # Execute workflow
    results = await workflow.execute({
        "query": "marketing automation trends 2024"
    })

    # Save results
    save_result(
        content=results,
        project_name="trends_research",
        status="research"
    )

if __name__ == "__main__":
    asyncio.run(main())
Built-in Plugins
- SerperWebSearch: Google SERP data extraction using Serper.dev API
- JinaAIReader: Clean content extraction from web pages
- LiteLLMPlugin: Unified access to various LLM APIs
- FraseAPI: NLP-powered content analysis and metrics extraction
- PlaywrightPlugin: Web scraping and automation
- StatsPlugin: Track and display request statistics
- More plugins coming soon!
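Plugins can also be used outside a workflow for one-off tasks. A minimal sketch, assuming each plugin exposes the same async execute interface that workers use (see the Workers section below); the query mirrors the workflow example:

import asyncio
from pynions.plugins import SerperWebSearch

async def main():
    serper = SerperWebSearch()  # reads SERPER_API_KEY from .env
    results = await serper.execute({"query": "marketing automation trends 2024"})
    print(results)

asyncio.run(main())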
Documentation
- Project Structure
- Installation Guide
- Configuration Guide
- Plugin Development
- Workflow Creation
- Debugging Guide
Requirements
- Python 3.8 or higher
- pip and venv
- Required API keys:
  - OpenAI API key
  - Serper.dev API key
  - Perplexity API key (optional)
Configuration
Environment Variables (.env)
Required:
- OPENAI_API_KEY: Your OpenAI API key

Optional:
- SERPER_API_KEY: For search functionality
- ANTHROPIC_API_KEY: For Claude models
- JINA_API_KEY: For embeddings
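Scripts outside Pynions can read these keys the same way the framework does, via dotenv (listed in the technology stack above). A minimal sketch:

import os
from dotenv import load_dotenv

load_dotenv()  # loads key=value pairs from the root .env into the environment
api_key = os.environ["OPENAI_API_KEY"]  # raises KeyError if the key is missing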
Application Config (pynions.json)
See pynions.example.json for all available options.
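To read the merged configuration in your own scripts, reuse the load_config helper from the workflow example above:

from pynions.core.config import load_config

config = load_config()  # reads the root .env and pynions.json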
Philosophy
- Use the "Don't Repeat Yourself" (DRY) principle
- Smart and safe defaults
- OpenAI's "gpt-4o-mini" is the default LLM (see the LiteLLM sketch after this list)
- Serper is the default search tool
- Perplexity is the default research tool
- Not AI-only: always keep a human in the loop
- Minimal dependencies
- No cloud dependencies
- All tools are local
- No sign-ups required beyond the API keys listed above (OpenAI, Serper.dev, and optionally Perplexity)
- No proprietary formats
- No tracking
- No telemetry
- No bullshit
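Because model access goes through LiteLLM, the default model is just a string to swap. A minimal sketch calling the default directly; completion and the OpenAI-style response shape are LiteLLM's own API, while the prompt is illustrative:

from litellm import completion

response = completion(
    model="gpt-4o-mini",  # the framework's default LLM
    messages=[{"role": "user", "content": "List three SERP monitoring ideas for a SaaS blog."}],
)
print(response.choices[0].message.content)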
Common Issues
- Module not found errors: run
  pip install -r requirements.txt
- API key errors:
  - Check that the .env file exists
  - Verify the API keys are correct
  - Remove quotes from API keys in .env
- Permission errors: run
  chmod 755 data
Contributing
See Project Structure for:
- Code organization
- Testing requirements
- Documentation standards
License
MIT License - see LICENSE for details
Support
If you encounter issues:
- Check the Debugging Guide
- Review relevant documentation sections
- Test components in isolation
- Use provided debugging tools
- Check common issues section
Credits
Standing on the shoulders of open-source giants, built with ☕️ and dedication by a marketer who codes.
Workers
Workers are standalone task executors that combine multiple plugins for specific data extraction needs. Perfect for automated research and monitoring tasks.
Available Workers
- PricingResearchWorker: Extracts structured pricing data from any SaaS website
import asyncio
import json

from pynions.workers import PricingResearchWorker

async def analyze_pricing():
    worker = PricingResearchWorker()
    result = await worker.execute({"domain": "example.com"})
    print(json.dumps(result, indent=2))

asyncio.run(analyze_pricing())
Features
- Task-specific implementations
- Automated data extraction
- Structured output
- Plugin integration
- Efficient processing
See Workers Documentation for more details.