
FreeLLM

The simplest Python client for free access to top-tier AI models via public endpoint

freellm is a lightweight, easy-to-use Python package that gives you instant access to powerful models like GPT-4.1 Nano, DeepSeek, Gemini Flash Lite, and Claude 3 Haiku, completely free: no API key, no registration required.

It works by communicating directly with a public web endpoint that exposes these models, delivering well-formatted responses with minimal setup.

Features

  • Zero setup — no accounts, no keys
  • Simple .ask("your message") interface
  • Four powerful models: gpt (default), deepseek, google, claude
  • Optional conversation memory (sends full history when limit is enabled)
  • Per-conversation message limit with automatic reset
  • Streaming support (token-by-token output)
  • Perfect handling of newlines and spacing (no stuck words or visible \n)
  • Clean and intuitive CLI
  • Minimal dependencies (only requests)

Installation

pip install freellm

Requires Python 3.8+

Quick Start

Programmatic Use

from freellm import FreeLLM

# One-shot query (GPT-4.1 Nano by default)
print(FreeLLM().ask("Tell me a joke"))

# Use DeepSeek
print(FreeLLM(model="deepseek").ask("Explain quantum computing in simple terms"))

# With memory + limit
bot = FreeLLM(model="claude", limit=20)
bot.ask("My name is Alice")
print(bot.ask("What is my name?"))
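Because the client talks to a free public endpoint, individual requests can occasionally time out or get rate-limited. A small retry helper keeps one-shot queries robust (a sketch, not part of the package: `attempts` and `delay` are illustrative parameters, and any callable such as a bound `FreeLLM(...).ask` can be passed in):

```python
import time

def ask_with_retry(ask, message, attempts=3, delay=1.0):
    """Call ask(message), retrying on any exception with a short pause.

    `ask` can be any callable, e.g. FreeLLM(model="gpt").ask.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return ask(message)
        except Exception as exc:  # the free endpoint may time out or rate-limit
            last_error = exc
            time.sleep(delay)
    raise last_error

# Usage (assumes freellm is installed):
# from freellm import FreeLLM
# print(ask_with_retry(FreeLLM().ask, "Tell me a joke"))
```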

Interactive Chat (CLI)

freellm                    # GPT-4.1 Nano, no memory
freellm --model deepseek   # Use DeepSeek
freellm --model google     # Gemini 2.0 Flash Lite
freellm --model claude     # Claude 3 Haiku
freellm --limit 15         # Enable memory (up to 15 user messages)
freellm --stream           # Token-by-token streaming
freellm --model deepseek --limit 20 --stream  # All features combined
freellm "Hello, who are you?"  # One-shot message

Usage Examples

# Persistent chat with Claude
bot = FreeLLM(model="claude", limit=10)
bot.ask("Explain how neural networks work")
bot.ask("Now give a real-world analogy")
bot.ask("Make it even simpler for a child")

# Quick stateless queries with different models
questions = ["Capital of Japan?", "Best way to learn Python?", "Write a haiku about rain"]
models = ["gpt", "deepseek", "google"]

for q, m in zip(questions, models):
    print(f"[{m.upper()}]: {FreeLLM(model=m).ask(q)}\n")
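If the same (model, question) pair is asked repeatedly, memoizing the answers avoids redundant round-trips to the free endpoint. This sketch wraps any ask-style callable (the cache key and `maxsize` are illustrative choices, not part of the freellm API):

```python
from functools import lru_cache

def cached_asker(make_ask, maxsize=128):
    """Return ask(model, question) that caches results per (model, question).

    `make_ask` builds a fresh asker for a model name, e.g.
    lambda m: FreeLLM(model=m).ask (assumes freellm is installed).
    """
    @lru_cache(maxsize=maxsize)
    def ask(model, question):
        return make_ask(model)(question)
    return ask

# Usage:
# ask = cached_asker(lambda m: FreeLLM(model=m).ask)
# ask("gpt", "Capital of Japan?")   # hits the endpoint
# ask("gpt", "Capital of Japan?")   # served from the cache
```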

CLI Options

freellm --help
usage: freellm [-h] [--model {gpt,deepseek,google,claude}] [--limit LIMIT] [--stream] [message]

FreeLLM - Free access to DeepSeek, Gemini, Claude & GPT 

positional arguments:
  message               Send a single message and exit

options:
  -h, --help            show this help message
  --model {gpt,deepseek,google,claude}
                        Model: gpt (default), deepseek, google, claude
  --limit LIMIT         Enable memory: max user messages before conversation reset
  --stream              Show response token-by-token (streaming)

Important Note on Memory

The underlying service is a free public endpoint and does not officially store conversation state.

When you set --limit or limit=N, FreeLLM sends the full conversation history with every request — this provides the best possible context retention.

Memory works reliably for short-to-medium conversations (up to ~20–30 messages depending on length) and may vary slightly with server load.
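Conceptually, the memory behavior described above amounts to replaying the accumulated history with every request and starting over once the user-message limit is hit. The following is a simplified illustration of that logic, not the package's actual implementation (`send` stands in for the HTTP call to the endpoint):

```python
class HistoryBuffer:
    """Illustrative model of limit-based memory with automatic reset."""

    def __init__(self, limit):
        self.limit = limit
        self.history = []        # alternating user/assistant messages
        self.user_messages = 0

    def ask(self, message, send):
        # Reset the conversation once the user-message limit is reached.
        if self.user_messages >= self.limit:
            self.history = []
            self.user_messages = 0
        self.history.append({"role": "user", "content": message})
        self.user_messages += 1
        # The full history is sent with every request.
        reply = send(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

With `limit=2`, the third `ask` starts a fresh conversation: the first two requests carry 1 and 3 messages of history, while the third carries just 1 again.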

Author

IMApurbo
GitHub: @IMApurbo

License

MIT License


Enjoy frontier-level AI models for free — no barriers, no costs! 🚀
Made with ❤️ by IMApurbo
