
Interact with AI without API key

Project description


python-tgpt

>>> import pytgpt.phind as phind
>>> bot = phind.PHIND()
>>> bot.chat('hello there')
'Hello! How can I assist you today?'

>>> from pytgpt.imager import Imager
>>> img = Imager()
>>> generated_images = img.generate(prompt="Cyberpunk", amount=3, stream=True)
>>> img.save(generated_images)

This project enables seamless interaction with over 45 free LLM providers without requiring an API key, and supports image generation as well.

The name python-tgpt draws inspiration from its parent project tgpt, which is written in Go. Through this Python adaptation, users can effortlessly engage with a number of freely available LLMs, fostering a smoother AI interaction experience.

Features

  • 🐍 Python package
  • 🌐 FastAPI for web integration
  • ⌨️ Command-line interface
  • 🧠 Multiple LLM providers - 45+
  • 🌊 Stream and non-stream response
  • 🚀 Ready to use (No API key required)
  • 🎯 Customizable script generation and execution
  • 🔌 Offline support for Large Language Models
  • 🎨 Image generation capabilities
  • 🎤 Text-to-audio conversion capabilities
  • ⛓️ Chained requests via proxy
  • 🗨️ Enhanced conversational chat experience
  • 💾 Capability to save prompts and responses (Conversation)
  • 🔄 Ability to load previous conversations
  • 🚀 Pass awesome-chatgpt prompts easily
  • 🤖 Telegram bot - interface
  • 🔄 Asynchronous support for all major operations.

Providers

These are simply the hosts of the LLMs, which include:

  1. Leo - Brave
  2. Koboldai
  3. OpenGPTs
  4. OpenAI (API key required)
  5. WebChatGPT - OpenAI (Session ID required)
  6. Gemini - Google (Session ID required)
  7. Phind
  8. Llama2
  9. Blackboxai
  10. gpt4all (Offline)
  11. Poe - Poe|Quora (Session ID required)
  12. Groq (API Key required)
  13. Perplexity
  14. YepChat
  15. Novita (API key required)

41+ providers proudly offered by gpt4free.

  • To list working providers run:
    $ pytgpt gpt4free test -y
    

Prerequisites

Installation and Usage

Installation

Download binaries for your system from here.

Alternatively, you can install from PyPI. (Recommended)

  1. Developers:

    pip install --upgrade python-tgpt
    
  2. Commandline:

    pip install --upgrade "python-tgpt[cli]"
    
  3. Full installation:

    pip install --upgrade "python-tgpt[all]"
    

pip install -U "python-tgpt[api]" will install REST API dependencies.

Termux extras

  1. Developers:

    pip install --upgrade "python-tgpt[termux]"
    
  2. Commandline:

    pip install --upgrade "python-tgpt[termux-cli]"
    
  3. Full installation:

    pip install --upgrade "python-tgpt[termux-all]"
    

pip install -U "python-tgpt[termux-api]" will install REST API dependencies.

Usage

This package offers a convenient command-line interface.

[!NOTE] phind is the default provider.

  • For a quick response:

    python -m pytgpt generate "<Your prompt>"
    
  • For interactive mode:

    python -m pytgpt interactive "<Kickoff prompt (though not mandatory)>"
    

Use the --provider flag followed by the provider name of your choice, e.g. --provider koboldai.

To list all providers offered by gpt4free, use the following command: pytgpt gpt4free list providers

You can also simply use pytgpt instead of python -m pytgpt.

Starting from version 0.2.7, running $ pytgpt without any other command or option will automatically enter the interactive mode. Otherwise, you'll need to explicitly declare the desired action, for example, by running $ pytgpt generate.

Developer Docs

  1. Generate a quick response
from pytgpt.leo import LEO
bot = LEO()
resp = bot.chat('<Your prompt>')
print(resp)
# Output : How may I help you.
  2. Get back whole response
from pytgpt.leo import LEO
bot = LEO()
resp = bot.ask('<Your Prompt>')
print(resp)
# Output
"""
{'completion': "I'm so excited to share with you the incredible experiences...", 'stop_reason': None, 'truncated': False, 'stop': None, 'model': 'llama-2-13b-chat', 'log_id': 'cmpl-3NmRt5A5Djqo2jXtXLBwJ2', 'exception': None}
"""

Stream Response

Just add the parameter stream with value True.

  1. Text Generated only
from pytgpt.leo import LEO
bot = LEO()
resp = bot.chat('<Your prompt>', stream=True)
for value in resp:
    print(value)
# output
"""
How may
How may I help 
How may I help you
How may I help you today?
"""
  2. Whole Response
from pytgpt.leo import LEO
bot = LEO()
resp = bot.ask('<Your Prompt>', stream=True)
for value in resp:
    print(value)
# Output
"""
{'completion': "I'm so", 'stop_reason': None, 'truncated': False, 'stop': None, 'model': 'llama-2-13b-chat', 'log_id': 'cmpl-3NmRt5A5Djqo2jXtXLBwxx', 'exception': None}

{'completion': "I'm so excited to share with.", 'stop_reason': None, 'truncated': False, 'stop': None, 'model': 'llama-2-13b-chat', 'log_id': 'cmpl-3NmRt5A5Djqo2jXtXLBwxx', 'exception': None}

{'completion': "I'm so excited to share with you the incredible ", 'stop_reason': None, 'truncated': False, 'stop': None, 'model': 'llama-2-13b-chat', 'log_id': 'cmpl-3NmRt5A5Djqo2jXtXLBwxx', 'exception': None}

{'completion': "I'm so excited to share with you the incredible experiences...", 'stop_reason': None, 'truncated': False, 'stop': None, 'model': 'llama-2-13b-chat', 'log_id': 'cmpl-3NmRt5A5Djqo2jXtXLBwxx', 'exception': None}
"""
Auto - *(selects any working provider)*
import pytgpt.auto as auto
bot = auto.AUTO()
print(bot.chat("<Your-prompt>"))
Openai
import pytgpt.openai as openai
bot = openai.OPENAI("<OPENAI-API-KEY>")
print(bot.chat("<Your-prompt>"))
Koboldai
import pytgpt.koboldai as koboldai
bot = koboldai.KOBOLDAI()
print(bot.chat("<Your-prompt>"))
Opengpt
import pytgpt.opengpt as opengpt
bot = opengpt.OPENGPT()
print(bot.chat("<Your-prompt>"))
phind
import pytgpt.phind as phind
bot = phind.PHIND()
print(bot.chat("<Your-prompt>"))
Gpt4free providers
import pytgpt.gpt4free as gpt4free
bot = gpt4free.GPT4FREE(provider="Koala")
print(bot.chat("<Your-prompt>"))
Novita
import pytgpt.novita as novita
bot = novita.NOVITA("<NOVITA-API-KEY>")
print(bot.chat("<Your-prompt>"))

Asynchronous

Version 0.7.0 introduces asynchronous implementations for almost all providers, except a few such as perplexity & gemini, which rely on other libraries that lack such support.

To make it easier, you just have to prefix Async to the common synchronous class name. For instance, OPENGPT is accessed as AsyncOPENGPT:

Streaming the whole AI response

import asyncio
from pytgpt.phind import AsyncPHIND

async def main():
    async_ask = await AsyncPHIND(False).ask(
        "Critique that python is cool.",
        stream=True
    )
    async for streaming_response in async_ask:
        print(
            streaming_response
        )

asyncio.run(
    main()
)

Streaming just the text

import asyncio
from pytgpt.phind import AsyncPHIND

async def main():
    async_ask = await AsyncPHIND(False).chat(
        "Critique that python is cool.",
        stream=True
    )
    async for streaming_text in async_ask:
        print(
            streaming_text
        )

asyncio.run(
    main()
)

To obtain more tailored responses, consider using the optimizer parameter. Its value can be set to either code or system_command.

from pytgpt.leo import LEO
bot = LEO()
resp = bot.ask('<Your Prompt>', optimizer='code')
print(resp)

[!IMPORTANT] Commencing from v0.1.0, the default mode of interaction is conversational. This mode enhances the interactive experience, offering better control over the chat history. By associating previous prompts and responses, it tailors conversations for a more engaging experience.

You can still disable the mode:

bot = koboldai.KOBOLDAI(is_conversation=False)

Utilize the --disable-conversation flag in the console to achieve the same functionality.

[!CAUTION] Bard auto-handles context, so the is_conversation parameter is unnecessary and not required when initializing the class. Also be informed that the majority of providers offered by gpt4free require Google Chrome in order to function.

Image Generation

This has been made possible by pollinations.ai.

$ pytgpt imager "<prompt>"
# e.g. pytgpt imager "Coding bot"
Developers
from pytgpt.imager import Imager

img = Imager()

generated_img = img.generate('Coding bot') # [bytes]

img.save(generated_img)
Download Multiple Images
from pytgpt.imager import Imager

img = Imager()

img_generator = img.generate('Coding bot', amount=3, stream=True)

img.save(img_generator)

# RAM friendly

Using Prodia provider

from pytgpt.imager import Prodia

img = Prodia()

img_generator = img.generate('Coding bot', amount=3, stream=True)

img.save(img_generator)
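Under the hood, each generated image is just `bytes`, so saving amounts to writing each item to its own file as it arrives, which is what keeps the streaming approach RAM-friendly. A rough sketch of that idea (the stand-in bytes and filenames here are hypothetical, not real pytgpt output):

```python
import os
import tempfile

# Stand-in for a stream of image bytes such as Imager.generate(..., stream=True) yields.
fake_images = [b"\x89PNG-fake-1", b"\x89PNG-fake-2"]

out_dir = tempfile.mkdtemp()
paths = []
for i, data in enumerate(fake_images):
    path = os.path.join(out_dir, f"coding_bot_{i}.jpg")
    with open(path, "wb") as fh:
        fh.write(data)  # write each image as it arrives; memory stays flat
    paths.append(path)

print(paths)
```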

Advanced Usage of Placeholders

The generate functionality has been enhanced starting from v0.3.0 to enable comprehensive utilization of the --with-copied option and support for accepting piped inputs. This improvement introduces placeholders, offering dynamic values for more versatile interactions.

| Placeholder | Represents |
|-------------|------------|
| {{stream}}  | The piped input |
| {{copied}}  | The last copied text |

This feature is particularly beneficial for intricate operations. For example:

$ git diff | pytgpt generate "Here is a diff file: {{stream}} Make a concise commit message from it, aligning with my commit message history: {{copied}}" --new

In this illustration, {{stream}} denotes the result of the $ git diff operation, while {{copied}} signifies the content copied from the output of the $ git log command.
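Conceptually, the placeholders are plain string substitutions into your prompt before it is sent off. A hypothetical helper (`fill_placeholders` is not part of pytgpt) illustrates the idea:

```python
def fill_placeholders(prompt: str, stream: str = "", copied: str = "") -> str:
    """Replace the {{stream}} and {{copied}} markers with their dynamic values."""
    return prompt.replace("{{stream}}", stream).replace("{{copied}}", copied)

prompt = "Here is a diff file: {{stream}} Make a concise commit message from it."
print(fill_placeholders(prompt, stream="<output of git diff>"))
```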

Awesome Prompts

These prompts are designed to guide the AI's behavior or responses in a particular direction, encouraging it to exhibit certain characteristics or behaviors. The term "awesome-prompt" is not a formal term in AI or machine learning literature, but it encapsulates the idea of crafting prompts that are effective in achieving desired outcomes. Let's say you want it to behave like a Linux Terminal, PHP Interpreter, or just to JAIL BREAK.

Instances :

$ pytgpt interactive --awesome-prompt "Linux Terminal"
# Act like a Linux Terminal

$ pytgpt interactive -ap DAN
# Jailbreak

[!NOTE] Awesome prompts are alternative to --intro. Run $ pytgpt awesome whole to list available prompts (200+). Run $ pytgpt awesome --help for more info.

Introducing RawDog

RawDog is a standout feature that exploits Python's versatile capabilities to command and control your system as per your needs. You can do virtually anything with it, since it generates and executes Python code driven by your prompts! To have a bite of RawDog, simply append the flag --rawdog (shortform -rd) in generate/interactive mode. This introduces a never-seen-before feature in the tgpt ecosystem. Thanks to AbanteAI/rawdog for the idea.

This can be useful in some ways. For instance :

$ pytgpt generate -n -q "Visualize the disk usage using pie chart" --rawdog

This will pop up a window showing system disk usage as shown below.

Passing Environment Variables

Pytgpt v0.4.6 introduces a convenient way of taking variables from the environment. To achieve that, set environment variables in your operating system or script prefixed with PYTGPT_, followed by the option name in uppercase, replacing dashes with underscores.

For example, for the option --provider, you would set an environment variable PYTGPT_PROVIDER to provide a default value for that option. Same case applies to boolean flags such as --rawdog whose environment variable will be PYTGPT_RAWDOG with value being either true/false. Finally, --awesome-prompt will take the environment variable PYTGPT_AWESOME_PROMPT.

[!NOTE] This is NOT limited to any command

The environment variables can be overridden by explicitly declaring new value.
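The naming convention above can be expressed as a tiny helper (illustrative only; `option_to_env` is not part of pytgpt):

```python
def option_to_env(option: str) -> str:
    # Maps a CLI option to its PYTGPT_-prefixed environment variable,
    # per the convention above: strip dashes, uppercase, dashes -> underscores.
    return "PYTGPT_" + option.lstrip("-").replace("-", "_").upper()

print(option_to_env("--provider"))        # PYTGPT_PROVIDER
print(option_to_env("--awesome-prompt"))  # PYTGPT_AWESOME_PROMPT
```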

[!TIP] Save the variables in a .env file in your current directory or export them in your ~/.zshrc file. To load previous conversations from a .txt file, use the -fp or --filepath flag. If no flag is passed, the default one will be used. To load context from a file without altering its content, use the --retain-file flag.

Dynamic Provider & Further Interfaces

Version 0.4.6 also introduces dynamic provider called g4fauto, which represents the fastest working g4f-based provider.

To launch the web interface for g4f-based providers, execute the following command in your terminal:

$ pytgpt gpt4free gui

This command initializes the Web-user interface for interacting with g4f-based providers.

To start the REST-API:

$ pytgpt api run

This command starts the RESTful API server, enabling you to interact with the service programmatically.

For accessing the documentation and redoc, navigate to the following paths in your web browser:

  • Documentation: /docs
  • ReDoc: /redoc

Speech Synthesis

To enable speech synthesis of responses, ensure you have either the VLC player installed on your system or, if you are a Termux user, the Termux:API package.

To activate speech synthesis, use the --talk-to-me flag or its shorthand -ttm when running your commands. For example:

$ pytgpt generate "Generate an ogre story" --talk-to-me

or

$ pytgpt interactive -ttm

This flag instructs the system to vocalize the AI responses and then play them, enhancing the user experience with auditory feedback.

Version 0.6.4 introduces another dynamic provider, auto, which denotes the best working provider overall. This relieves you of the workload of manually checking for a working provider each time you fire up pytgpt. However, auto does not work well with streaming responses, so you may need to sacrifice streaming for the sake of reliability.

Telegram Bot

If you're not satisfied with the existing interfaces, pytgpt-bot could be the solution you're seeking. This bot is designed to enhance your experience by offering a wide range of functionalities. Whether you're interested in engaging in AI-driven conversations, creating images and audio from text, or exploring other innovative features, pytgpt-bot is equipped to meet your needs.

The bot is maintained as a separate project, so you only need to run one command to install it:

$ pip install pytgpt-bot

Usage : pytgpt bot run <bot-api-token>

Or you can simply interact with the one running now as @pytgpt-bot

For more usage info run $ pytgpt --help

Usage: pytgpt [OPTIONS] COMMAND [ARGS]...

Options:
  -v, --version  Show the version and exit.
  -h, --help     Show this message and exit.

Commands:
  api          FastAPI control endpoint
  awesome      Perform CRUD operations on awesome-prompts
  bot          Telegram bot interface control
  generate     Generate a quick response with AI
  gpt4free     Discover gpt4free models, providers etc
  imager       Generate images with pollinations.ai
  interactive  Chat with AI interactively (Default)
  utils        Utility endpoint for pytgpt
  webchatgpt   Reverse Engineered ChatGPT Web-Version

API Health Status

| No. | API | Status |
|-----|-----|--------|
| 1. | On-render | cron-job |

CHANGELOG

Acknowledgements

  1. tgpt
  2. gpt4free

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

python_tgpt-0.7.8.tar.gz (87.8 kB)

Uploaded Source

Built Distribution

python_tgpt-0.7.8-py3-none-any.whl (104.7 kB)

Uploaded Python 3

File details

Details for the file python_tgpt-0.7.8.tar.gz.

File metadata

  • Download URL: python_tgpt-0.7.8.tar.gz
  • Upload date:
  • Size: 87.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.20

File hashes

Hashes for python_tgpt-0.7.8.tar.gz

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | d57e49f356153f357d0572d46512c6b5cc5a3459133ae3a1843f4d0a2b15e3cc |
| MD5 | 850427d6e80c450ad6be4b7e634fe654 |
| BLAKE2b-256 | dd66f644255b26755e1f4e7433df7ca3e9a69d2edc88d17a0797039f5f6658b8 |

See more details on using hashes here.

File details

Details for the file python_tgpt-0.7.8-py3-none-any.whl.

File metadata

  • Download URL: python_tgpt-0.7.8-py3-none-any.whl
  • Upload date:
  • Size: 104.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.20

File hashes

Hashes for python_tgpt-0.7.8-py3-none-any.whl

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | e79768ccf4d5c37ef39b578676d28887c2de9497ee86894d191dc8fe9f03157e |
| MD5 | be8f8cfc1577cb5bbf44a368e60f4f46 |
| BLAKE2b-256 | c1fb0617802732bba148ce5b59e1fc630a77055c75c238e1ca1c977a17625b4c |

See more details on using hashes here.
