
Penguin Tamer - AI-powered terminal assistant for Linux systems


🐧 Penguin Tamer 🐧


🐧 Tame your Linux terminal with AI power! Ask questions to ChatGPT, DeepSeek, Grok, and many other large language models (LLMs). Execute scripts and commands suggested by the neural network directly from the command line. Perfect for beginners in Linux and Windows administration.

🌍 Available in: English | Русский



Install

curl -sSL https://raw.githubusercontent.com/Vivatist/penguin-tamer/main/install.sh | bash

Uninstall

pipx uninstall penguin-tamer

Description

Features

  • Quick AI queries — Get answers from large language models via the command line
  • No GUI — Communicate with your chosen AI in natural language and any locale: ai how to install Russian fonts?
  • Interactive dialog mode — Chat with AI in dialog mode with preserved conversation context
  • Code execution — Execute scripts and commands suggested by AI in the console
  • Friendly interface — Formatted output with syntax highlighting — just like you’re used to when working with neural networks
  • Multiple AI providers — Support for OpenAI, OpenRouter, DeepSeek, Anthropic and other popular providers
  • Multi-language support — English and Russian are available now. You can help with translation into other languages.

Quick Start

Try asking the assistant a question, for example pt who are you? In a couple of seconds, the neural network will respond.


On first launch, the program uses a Microsoft-hosted model — DeepSeek-R1-Lite-Preview with a public token. This is not the best option since you may see a quota-exceeded message due to high traffic, but it’s fine for a test run.

For full operation, you need to obtain a personal token and add it to the selected model in the program settings.

[!NOTE] penguin-tamer can work with any neural network that supports API access. Today this includes almost all large language models (LLMs) on the market. How to add a new model.

Connecting to Neural Networks

penguin-tamer ships with several popular models pre-configured, such as DeepSeek, Grok 4 Fast, and Qwen3 Coder. However, provider policies don’t allow full operation without authorization: you must obtain a personal token (API_KEY) from the provider’s website.

Getting a Token (API_KEY) and Connecting to a Pre-installed Model

We recommend the provider OpenRouter — simple registration and dozens of popular models available for free with a single token.

  • Register on the website
  • Get a token by clicking Create API key. Save it — OpenRouter will show it only once!
  • Add the token to penguin-tamer in the settings of the selected model
  • Make this model the current one

Done! Now the selected model will answer you in the console. You can connect any other model from this website in the same way.

[!NOTE] One OpenRouter token is valid for all models available from this provider.

A similar procedure applies to other providers, although with OpenRouter available, you may not need it.

Adding a New Model

To add a new model to penguin-tamer, including a local model or one from major providers, simply enter in penguin-tamer settings:

  • API_KEY (your personal token)
  • API_URL (API base URL)
  • model (model name)

You can find this information on the provider’s website in the API section.
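For illustration, here is roughly how those three fields could look once saved. The exact layout of config.yaml is an assumption and may differ between versions; treat the keys and values as placeholders, not the real schema:

```yaml
# ~/.config/penguin-tamer/config.yaml — layout is illustrative only
models:
  my-model:
    API_KEY: "sk-..."                       # personal token from the provider
    API_URL: "https://api.example.com/v1"   # API base URL
    model: "provider/model-name"            # model name from the API section
current_model: my-model
```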

Connection Example

Let’s use the free Meta: Llama 3.1 model, listed on OpenRouter among dozens of other free models, as an example.

Open the model’s page and find the API section.

Among the connection examples, look for information similar to:

  • API_URL — for OpenRouter, this parameter is called base_url
  • model — listed as model

How to get API_KEY is described above.

Enter these values (without quotes) in penguin-tamer settings and set this model as current. Now Meta: Llama 3.1 will answer your questions.
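Under the hood, these three settings plug into a standard OpenAI-compatible chat request. The sketch below is illustrative, not penguin-tamer’s actual code: the base URL is OpenRouter’s commonly documented endpoint, and the model slug is an example; copy the real values from the model’s page.

```python
import json
import urllib.request

API_KEY = "sk-or-..."                             # your personal token
API_URL = "https://openrouter.ai/api/v1"          # base_url from the API section
MODEL = "meta-llama/llama-3.1-8b-instruct:free"   # model name as listed

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "who are you?"}],
}
req = urllib.request.Request(
    f"{API_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send the request (network and a valid key required).
```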

Examples

Quick Query

# Simple question
ai kernel update script

Dialog Mode

Penguin Tamer always works in dialog mode, preserving the conversation context throughout the session.

You can start a dialog with an initial question:

pt what python version is installed?

Or without a question to begin an interactive session:

pt  # Enter
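Context preservation can be pictured as a growing message list in the chat-completions format. This is a simplified sketch of the idea, not penguin-tamer’s actual internals; call_model is a stand-in stub for the real API call:

```python
# A dialog session keeps the whole exchange and resends it with each turn.
history = [{"role": "system", "content": "You are a helpful Linux assistant."}]

def call_model(messages: list[dict]) -> str:
    """Stub: a real client would POST `messages` to the provider here."""
    return f"(reply to: {messages[-1]['content']})"

def ask(question: str) -> str:
    """One dialog turn: the full history travels with every request."""
    history.append({"role": "user", "content": question})
    answer = call_model(history)
    history.append({"role": "assistant", "content": answer})
    return answer

ask("what python version is installed?")
ask("and which pip goes with it?")  # the model still sees the first question
```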

Running Code from AI Response

If the response contains code blocks, they are numbered. To run one, simply enter its number in the console.
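The numbering can be thought of as extracting the fenced code blocks from the markdown reply and indexing them. A simplified sketch of that idea, not the actual implementation:

```python
import re

def extract_blocks(markdown: str) -> dict[int, str]:
    """Map block numbers (starting at 1) to the fenced code bodies."""
    fence = "`" * 3
    pattern = fence + r"[^\n]*\n(.*?)" + fence
    blocks = re.findall(pattern, markdown, flags=re.DOTALL)
    return {i: code.strip() for i, code in enumerate(blocks, start=1)}

fence = "`" * 3
reply = (
    "Update the package index first:\n"
    f"{fence}bash\nsudo apt update\n{fence}\n"
    "Then upgrade:\n"
    f"{fence}bash\nsudo apt upgrade -y\n{fence}\n"
)
print(extract_blocks(reply))  # → {1: 'sudo apt update', 2: 'sudo apt upgrade -y'}
```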


Security

[!WARNING] Never execute code suggested by the neural network if you’re not sure what it does!

Best Practices

  1. Review code before execution

    # Always check what AI suggests
    ai Delete all files from /tmp  # Don’t run this blindly!
    
  2. Use safe commands

    # Prefer these over destructive operations
    ai Show disk usage
    ai Show running processes
    

Configuration

Initial Setup

Run the setup mode to configure your AI provider:

pt -s

Supported AI Providers

  • OpenAI (GPT-3.5, GPT-4)
  • Anthropic (Claude)
  • OpenRouter (Multiple models)
  • Local models (Ollama, LM Studio)

And many others that support API access.

Configuration File

Settings are stored in:

  • Linux: ~/.config/penguin-tamer/config.yaml
  • Windows: %APPDATA%\penguin-tamer\config.yaml

Reset Settings

To restore defaults, delete the configuration file manually or run:

# For Linux
rm ~/.config/penguin-tamer/config.yaml
# For Windows (Command Prompt)
del %APPDATA%\penguin-tamer\config.yaml

Contributing

I’ll be glad for any help!

Areas for Contribution

  • 🌍 Localization — Adding support for new languages (template), including README.md
  • 🤖 AI Providers — Integrating new AI providers
  • 🎨 UI/UX — Improving the configuration manager interface (yes, it’s not perfect)
  • 🔧 Tools — Creating additional utilities
  • 💡 Ideas — I welcome any ideas to improve and develop penguin-tamer. Join the discussion

Here’s how to get started:

Development Environment Setup

  1. Fork the repository

  2. Clone your fork:

    git clone https://github.com/your-username/penguin-tamer.git
    cd penguin-tamer
    
  3. Set up the development environment:

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
    pip install -r requirements.txt
    pip install -e .
    
  4. Install git hooks (optional but recommended):

    make install-hooks        # Linux/Mac
    make.bat install-hooks    # Windows
    

    This will automatically run tests before commits and pushes.

Contribution Guidelines

  • 📝 Code Style: Follow PEP 8
  • 🧪 Testing: Add tests for new features (run python run_tests.py)
  • 🔍 Pre-commit: Tests run automatically before commits (or use git commit --no-verify to skip)
  • 📚 Documentation: Update README for new features
  • 🔄 Pull Requests: Use clear commit messages


License

This project is licensed under the MIT License.

Contacts


Created with ❤️ for the Linux community

⭐ Star on GitHub



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

penguin_tamer-0.9.2.tar.gz (26.5 MB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

penguin_tamer-0.9.2-py3-none-any.whl (129.7 kB)

Uploaded Python 3

File details

Details for the file penguin_tamer-0.9.2.tar.gz.

File metadata

  • Download URL: penguin_tamer-0.9.2.tar.gz
  • Upload date:
  • Size: 26.5 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for penguin_tamer-0.9.2.tar.gz:

  • SHA256: e41c3bd56be26dc3ed014807a66e273528dfd284a14f17828c13e51f1ff48243
  • MD5: 1ce654286fdca17d637d0a048ec3c2d3
  • BLAKE2b-256: 39ac28f706b8afef208332fc8afef3c5a2e42135b6570c07375c1e1b769996ba

See more details on using hashes here.

File details

Details for the file penguin_tamer-0.9.2-py3-none-any.whl.

File metadata

  • Download URL: penguin_tamer-0.9.2-py3-none-any.whl
  • Upload date:
  • Size: 129.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for penguin_tamer-0.9.2-py3-none-any.whl:

  • SHA256: 4f83a9eed6873e67e18426bd558d638de4da94a5322ab6954b069d126349965d
  • MD5: 2434beab2e363e68fa5c6631a44a22d3
  • BLAKE2b-256: e8cb301e57ffed88a787a37e691ec65d27ef091b280ca79101f3e5b758a369dc

See more details on using hashes here.
