
Set up a modern REST API by running one command.


FastAPI Gen

Create production-ready FastAPI applications in seconds

From simple APIs to LLM-enabled applications, all without build configuration.


Quick Start

Get a fully functional FastAPI app running in 30 seconds:

# Recommended: using uvx (no installation needed)
uvx fastapi-gen my_app
cd my_app && make start

Or install with uv:

uv tool install fastapi-gen
fastapi-gen my_app
cd my_app && make start

Or use pip:

pip install fastapi-gen
fastapi-gen my_app
cd my_app && make start

That's it! Open http://localhost:8000/docs to see your OpenAPI documentation.

Platform Support: Works on macOS and Linux | Report Issues

Why FastAPI Gen?

  • Focus on Code - Skip boilerplate setup and start building
  • Production Ready - Enterprise patterns and best practices built-in
  • Testing Included - Real test coverage from day one
  • Zero Config - Ready-to-run templates that just work


Templates Overview

Hello World - Perfect for Learning FastAPI

Best for: Learning FastAPI fundamentals and starting new projects

Key Features:

  • REST API fundamentals with complete CRUD
  • Configuration management (pydantic-settings & dotenv)
  • Dependency injection and clean architecture
  • Background tasks and exception handling
  • Input validation and health monitoring
  • Complete test coverage

View Details →
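
The template's actual code is more complete, but the patterns listed above boil down to something like this hypothetical sketch (the Settings class, /notes routes, and in-memory store are illustrative only):

from fastapi import BackgroundTasks, Depends, FastAPI, HTTPException
from pydantic import BaseModel
from pydantic_settings import BaseSettings

class Settings(BaseSettings):
    app_name: str = "my_app"  # overridable via environment variables or a .env file

class Note(BaseModel):
    id: int
    text: str

app = FastAPI()
notes: dict[int, Note] = {}  # in-memory store standing in for real persistence

def get_settings() -> Settings:
    return Settings()

@app.get("/health")
def health(settings: Settings = Depends(get_settings)) -> dict:
    return {"status": "ok", "app": settings.app_name}

@app.post("/notes", status_code=201)
def create_note(note: Note, tasks: BackgroundTasks) -> Note:
    notes[note.id] = note
    tasks.add_task(print, f"created note {note.id}")  # fire-and-forget background task
    return note

@app.get("/notes/{note_id}")
def read_note(note_id: int) -> Note:
    if note_id not in notes:
        raise HTTPException(status_code=404, detail="Note not found")
    return notes[note_id]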

Advanced - Enterprise Production Template

Best for: Production applications with enterprise features

Key Features:

  • JWT authentication with registration and login
  • Database integration with SQLAlchemy 2.0 async
  • Rate limiting and caching system
  • WebSocket support and file upload
  • Enhanced security and CORS configuration
  • Full test suite

View Details →
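
As a rough, hypothetical sketch of the database side (the real template's models, settings, and auth flow differ), SQLAlchemy 2.0 async wiring with a session-per-request dependency looks roughly like this:

from fastapi import Depends, FastAPI
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

engine = create_async_engine("sqlite+aiosqlite:///./app.db")  # illustrative URL; needs the aiosqlite driver
SessionLocal = async_sessionmaker(engine, expire_on_commit=False)

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    email: Mapped[str] = mapped_column(unique=True)

app = FastAPI()

async def get_db():
    async with SessionLocal() as session:  # one session per request
        yield session

@app.get("/users")
async def list_users(db: AsyncSession = Depends(get_db)) -> list[dict]:
    result = await db.execute(select(User))
    return [{"id": u.id, "email": u.email} for u in result.scalars()]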

NLP - Comprehensive AI Language Processing

Best for: AI applications with natural language processing

Key Features:

  • 8 NLP capabilities: summarization, NER, generation, QA, embeddings, sentiment, classification, similarity
  • Production architecture with startup model loading
  • Smart configuration and device auto-detection
  • Performance optimized with model caching
  • Real AI testing with actual inference

View Details →
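
The core idea behind startup model loading is roughly this (a minimal sketch, not the template's actual code; the real app auto-detects the device and exposes all eight capabilities):

from contextlib import asynccontextmanager
from fastapi import FastAPI
from transformers import pipeline

models = {}

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Load once at startup and reuse for every request; device=-1 forces CPU here.
    models["sentiment"] = pipeline("sentiment-analysis", device=-1)
    yield
    models.clear()

app = FastAPI(lifespan=lifespan)

@app.post("/sentiment")
def sentiment(text: str) -> dict:
    return models["sentiment"](text)[0]  # e.g. {"label": "POSITIVE", "score": 0.99}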

LangChain - Modern LLM Integration

Best for: Applications using LangChain for LLM workflows

Key Features:

  • Optimized loading with startup caching
  • Modern LangChain patterns and best practices
  • Smart config with auto device detection
  • Production ready with health checks
  • Dual endpoints: text generation and question answering

View Details →
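
A hypothetical sketch of the runnable-chain style the template is built around (a fake LLM is used here purely so the snippet runs without model downloads; the generated project wires in a real model):

from fastapi import FastAPI
from langchain_core.language_models import FakeListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Answer briefly: {question}")
llm = FakeListLLM(responses=["This is a placeholder answer."])
chain = prompt | llm | StrOutputParser()  # modern "runnable" composition

app = FastAPI()

@app.post("/qa")
def qa(question: str) -> dict:
    return {"answer": chain.invoke({"question": question})}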

Llama - Local LLM Powerhouse

Best for: Local LLM inference with llama-cpp-python

Key Features:

  • Local LLM focus optimized for Gemma/Llama GGUF models
  • GPU acceleration with auto detection
  • Advanced config for context windows and threading
  • Production ready with lifecycle management
  • Easy setup with auto model download

Requirements: ~4GB model download + 4GB+ RAM

View Details →
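
For orientation, llama-cpp-python is typically configured along these lines (the path and parameters below are made up; the template downloads a model and picks sensible values for you):

from llama_cpp import Llama

llm = Llama(
    model_path="models/gemma-2b-it.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_ctx=2048,        # context window
    n_threads=4,       # CPU threads
    n_gpu_layers=-1,   # offload all layers to GPU when one is available
)

out = llm("Explain FastAPI in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])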


Template Comparison

Template     Best For               Complexity  AI/ML  Database  Auth
Hello World  Learning, Simple APIs  Basic       No     No        No
Advanced     Production Apps        Medium      No     Yes       Yes
NLP          AI Text Processing     Advanced    Yes    No        No
LangChain    LLM Workflows          Advanced    Yes    No        No
Llama        Local LLM              Advanced    Yes    No        No

What You Get Out of the Box

  • Zero Configuration
  • Production Patterns
  • Complete Testing
  • Code Quality
  • Auto Documentation
  • Deployment Ready

Focus on Your Code, Not Setup

All dependencies (FastAPI, Pydantic, Pytest, etc.) are preconfigured. Just create and run:

fastapi-gen my_app   # Create
cd my_app            # Enter  
make start           # Run!

Every Template Includes:

  • Ready-to-run development environment
  • Industry-standard project structure
  • Comprehensive test suites with examples
  • Ruff linting and formatting
  • Auto-generated OpenAPI documentation
  • Makefile with common development commands

Installation & Usage

You'll need Python 3.11 or later on your local development machine. We recommend using the latest version. You can use uv for Python version management and project workflows.

Choose Your Template

# Default (hello_world)
uvx fastapi-gen my_app

# Or specify a template
uvx fastapi-gen my_app --template <template-name>

Available templates: hello_world, advanced, nlp, langchain, llama

Built-in Commands

Inside the newly created project, you can run:

make start

Runs the app in development mode.
Open http://localhost:8000/docs to view OpenAPI documentation in the browser.

The server will automatically reload if you make changes to the code.

make test

Runs tests.
By default, runs tests related to files changed since the last commit.

License

fastapi-gen is distributed under the terms of the MIT license.



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

fastapi_gen-0.11.1.tar.gz (41.0 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

fastapi_gen-0.11.1-py3-none-any.whl (62.1 kB)

Uploaded Python 3

File details

Details for the file fastapi_gen-0.11.1.tar.gz.

File metadata

  • Download URL: fastapi_gen-0.11.1.tar.gz
  • Upload date:
  • Size: 41.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for fastapi_gen-0.11.1.tar.gz
Algorithm Hash digest
SHA256 3449f34a7eabd38adadab44fd0dd4e9a63c782331f757ada97b1b57c80c45454
MD5 194fd104ae3a042e63f1875984e321e3
BLAKE2b-256 a0de22279462e8a45d62863ca38045f7588a5081c438ca5adc4bb42f7cfacf69

See more details on using hashes here.
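
If you want to verify a download yourself, the SHA256 digest above can be checked with the standard library, for example:

import hashlib

expected = "3449f34a7eabd38adadab44fd0dd4e9a63c782331f757ada97b1b57c80c45454"
with open("fastapi_gen-0.11.1.tar.gz", "rb") as f:  # assumes the sdist is in the current directory
    digest = hashlib.sha256(f.read()).hexdigest()
print("OK" if digest == expected else "MISMATCH")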

File details

Details for the file fastapi_gen-0.11.1-py3-none-any.whl.

File metadata

  • Download URL: fastapi_gen-0.11.1-py3-none-any.whl
  • Upload date:
  • Size: 62.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for fastapi_gen-0.11.1-py3-none-any.whl
Algorithm Hash digest
SHA256 2bca82c905a5f9fde76eec10273222abfb56ca7c6de00c24e18b9d9ed444b61d
MD5 de5f232f1b2b23950bbb67b2f7bdbb9e
BLAKE2b-256 0af0f908a014b5d02a2856d97eadca308c2828a306801ddb915031f207a85c77

See more details on using hashes here.
