A package for training AI agents to play retro games using natural language


🎮 Casino of Life

A revolutionary framework for training AI agents in retro fighting games using natural language interactions. Casino of Life combines reinforcement learning with natural language processing to create an intuitive interface for training game-playing AI agents.

🌟 Features

Natural Language Training Interface

  • Train AI agents using natural conversations
  • Explain strategies in plain English
  • Get real-time feedback on training progress
  • Interactive chat with CaballoLoko, your AI training assistant

Supported Games

  • Mortal Kombat II (Genesis)
  • Street Fighter II (Coming Soon)
  • More fighting games to be added

Advanced Training Capabilities

  • Multiple training strategies (Aggressive, Defensive, Balanced)
  • Multiple reinforcement learning algorithms (PPO, A2C, DQN) with MLP-based policies
  • Custom reward functions
  • Save and load training states
  • Real-time training visualization
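One way to picture how a named strategy might translate into a reward signal is as a set of reward weights. The sketch below is purely illustrative — the preset names, weight values, and `step_reward` helper are assumptions, not Casino of Life internals:

```python
# Illustrative mapping from a named strategy to reward weights.
# These names and values are assumptions, not library internals.
STRATEGY_PRESETS = {
    "aggressive": {"damage_dealt": 1.5, "damage_taken": -0.5, "round_win": 10.0},
    "defensive":  {"damage_dealt": 0.5, "damage_taken": -1.5, "round_win": 10.0},
    "balanced":   {"damage_dealt": 1.0, "damage_taken": -1.0, "round_win": 10.0},
}

def step_reward(strategy: str, dealt: float, taken: float, won: bool) -> float:
    """Combine per-step game events into a scalar reward for one strategy."""
    w = STRATEGY_PRESETS[strategy]
    return (w["damage_dealt"] * dealt
            + w["damage_taken"] * taken
            + (w["round_win"] if won else 0.0))

print(step_reward("aggressive", dealt=20, taken=10, won=False))  # 25.0
```

An aggressive preset weights damage dealt more heavily than damage avoided, so the same game events produce different gradients for different play styles.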

🚀 Quick Start

Installation

pip install casino-of-life

Basic Usage

from casino_of_life.agents import DynamicAgent, CaballoLoko
from casino_of_life.environment import RetroEnv

# Initialize CaballoLoko for training guidance
caballo_loko = CaballoLoko()
response = caballo_loko.chat("Train Liu Kang to be aggressive with fireballs")

# Create environment and agent
env = RetroEnv(
    game='MortalKombatII-Genesis',
    state='tournament',
    players=2
)
agent = DynamicAgent(
    env=env,
    policy='PPO',
    learning_rate=0.0003
)

# Start training with natural language guidance
agent.train(
    timesteps=100000
)

🛠 Advanced Features

Flexible Reward System

from casino_of_life.reward_evaluators import (
    BasicRewardEvaluator,
    StageCompleteRewardEvaluator,
    MultiObjectiveRewardEvaluator
)

# Create custom reward evaluator
reward_system = MultiObjectiveRewardEvaluator([
    BasicRewardEvaluator(health_reward=1.0, damage_penalty=-1.0),
    StageCompleteRewardEvaluator(stage_complete_reward=100.0)
])
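Conceptually, a multi-objective evaluator sums the per-step rewards of its children. The classes below are simplified stand-ins for illustration only — the real evaluators in `casino_of_life.reward_evaluators`, and the keys in the emulator's `info` dict, may differ:

```python
# Simplified stand-ins for the evaluator interface; illustrative only.
class BasicEvaluator:
    def __init__(self, health_reward=1.0, damage_penalty=-1.0):
        self.health_reward = health_reward
        self.damage_penalty = damage_penalty

    def evaluate(self, info):
        # info: per-step dict from the emulator (assumed keys)
        return (self.health_reward * info.get("health_gained", 0)
                + self.damage_penalty * info.get("damage_taken", 0))

class StageCompleteEvaluator:
    def __init__(self, stage_complete_reward=100.0):
        self.bonus = stage_complete_reward

    def evaluate(self, info):
        return self.bonus if info.get("stage_complete") else 0.0

class MultiObjectiveEvaluator:
    """Sums the rewards of its child evaluators each step."""
    def __init__(self, evaluators):
        self.evaluators = evaluators

    def evaluate(self, info):
        return sum(e.evaluate(info) for e in self.evaluators)

combined = MultiObjectiveEvaluator([BasicEvaluator(), StageCompleteEvaluator()])
print(combined.evaluate(
    {"health_gained": 5, "damage_taken": 3, "stage_complete": True}))  # 102.0
```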

Environment Configuration

from casino_of_life.environment import RetroEnv
# Create a two-player tournament environment
env = RetroEnv(
    game='MortalKombatII-Genesis',
    state='tournament',
    players=2
)

Advanced Training Control

from casino_of_life.agents import DynamicAgent
from casino_of_life.client_bridge import RewardEvaluatorManager

# Initialize reward manager
reward_manager = RewardEvaluatorManager()
reward_manager.register_evaluator("tournament", reward_system)

# Create dynamic agent with custom rewards
agent = DynamicAgent(
    env=env,
    reward_evaluator=reward_manager.get_evaluator("tournament"),
    frame_stack=4,
    learning_rate=0.0003
)

🎯 Use Cases

Game Developers

  • Test game balance
  • Create sophisticated AI opponents
  • Generate training data for game testing

AI Researchers

  • Experiment with reinforcement learning in complex environments
  • Study human-AI interaction through natural language
  • Develop and test new training strategies

Gaming Community

  • Create custom AI training scenarios
  • Share and compare training results
  • Contribute to the evolution of game AI and the Casino of Life framework.

🔧 Technical Details

Environment Features

  • Stochastic frame skipping for realistic gameplay
  • Configurable observation processing (84x84 grayscale)
  • 4-frame stacking for temporal information
  • Multi-player support (up to 2 players)
  • Automatic garbage collection for memory management
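The 4-frame stacking described above can be sketched with a plain `deque`. `FrameStack` here is an illustrative stand-in (frames are simple values in place of 84x84 grayscale arrays), not the library's actual implementation:

```python
from collections import deque

class FrameStack:
    """Keep the last k frames so the agent can observe motion over time."""
    def __init__(self, k=4):
        self.k = k
        self.frames = deque(maxlen=k)

    def reset(self, frame):
        # On reset, repeat the first frame k times so the stack is full.
        self.frames.clear()
        for _ in range(self.k):
            self.frames.append(frame)
        return list(self.frames)

    def step(self, frame):
        # Each new frame pushes out the oldest, preserving temporal context.
        self.frames.append(frame)
        return list(self.frames)

stack = FrameStack(k=4)
obs = stack.reset(frame=0)
obs = stack.step(frame=1)
print(obs)  # [0, 0, 0, 1]
```

Stacking matters in fighting games because a single frame cannot distinguish, for example, a fireball moving toward the agent from one moving away.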

Reward System

  • Modular reward evaluators
  • Health-based reward calculation
  • Stage completion bonuses
  • Multi-objective reward combination
  • Custom reward scaling
  • Real-time reward adjustment

Training Pipeline

  • Integration with Stable-Baselines3
  • Support for multiple RL algorithms
  • Customizable training parameters
  • Progress tracking and checkpointing
  • Memory-efficient design
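Progress tracking and checkpointing can be pictured as a periodic callback fired during training. The `CheckpointTracker` class and its JSON file layout below are hypothetical, shown only to illustrate the pattern, not the framework's actual pipeline:

```python
import json
import os
import tempfile

class CheckpointTracker:
    """Every `save_freq` steps, write the step count and metrics to disk.
    Illustrative sketch; names and file format are assumptions."""
    def __init__(self, save_dir, save_freq=10_000):
        self.save_dir = save_dir
        self.save_freq = save_freq
        self.saved = []

    def on_step(self, step, metrics):
        if step % self.save_freq == 0:
            path = os.path.join(self.save_dir, f"ckpt_{step}.json")
            with open(path, "w") as f:
                json.dump({"step": step, **metrics}, f)
            self.saved.append(path)

tmp = tempfile.mkdtemp()
tracker = CheckpointTracker(tmp, save_freq=2)
for step in range(1, 7):
    tracker.on_step(step, {"mean_reward": step * 0.5})
print(len(tracker.saved))  # 3  (checkpoints at steps 2, 4, 6)
```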

📊 Web Interface

Training Dashboard

  • Real-time training metrics
  • Agent management
  • Model versioning
  • Interactive chat
  • Training configuration

API Integration

from casino_of_life.web import TrainingServer

# Start training server
server = TrainingServer()
server.start()

Register a handler for real-time updates over the WebSocket connection:

@server.on_message
async def handle_training_request(message):
    training_id = await server.start_training(message)
    return {"training_id": training_id}
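A request/response round trip through a handler like the one above might look like the following sketch. The message schema and the `fake_start_training` stub are assumptions standing in for the real server API:

```python
import asyncio

# Hypothetical message a client might send to request a training run.
request = {"game": "MortalKombatII-Genesis", "policy": "PPO", "timesteps": 100000}

async def fake_start_training(message):
    # Stand-in for server.start_training: derive an id from the request.
    return f"train-{message['policy'].lower()}"

async def handle_training_request(message):
    training_id = await fake_start_training(message)
    return {"training_id": training_id}

result = asyncio.run(handle_training_request(request))
print(result)  # {'training_id': 'train-ppo'}
```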

🤝 Contributing

We welcome contributions! See our Contributing Guide for details.

Development Setup

git clone https://github.com/Cimai-Decentralized-Games/casino-of-life.git
cd casino-of-life
pip install -r requirements.txt

📚 Documentation

Full documentation available at https://docs.cimai.biz

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • G4F for access to GPT models and other providers
  • Stable-Retro for game emulation
  • Stable-Baselines3 for RL implementations
  • The fighting game community for inspiration and support

Made with ❤️ by Cimai Decentralized Games

Download files

Download the file for your platform.

Source Distribution

casino_of_life-0.2.0.tar.gz (34.5 kB)

Uploaded Source

Built Distribution


casino_of_life-0.2.0-py3-none-any.whl (41.9 kB)

Uploaded Python 3

File details

Details for the file casino_of_life-0.2.0.tar.gz.

File metadata

  • Download URL: casino_of_life-0.2.0.tar.gz
  • Upload date:
  • Size: 34.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.16

File hashes

Hashes for casino_of_life-0.2.0.tar.gz
  • SHA256: 11a813e9b1458b356cb9b1024a178b0bbc3f3cb7d1fcc31aa447eb86cb505f2a
  • MD5: 0d09ef0dac8ac5a36d82ef3d091476fa
  • BLAKE2b-256: 01e89877bbf464981184225d4605d95f4887c5edcbeba185092f8b909c2b61c9


File details

Details for the file casino_of_life-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: casino_of_life-0.2.0-py3-none-any.whl
  • Upload date:
  • Size: 41.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.16

File hashes

Hashes for casino_of_life-0.2.0-py3-none-any.whl
  • SHA256: 129132c036f7fda079f6abe870b504aeabb999232a6f467ed2e1f2b47d605955
  • MD5: fe61ec2bba0e76355300e014d8134bea
  • BLAKE2b-256: 21675ae241ec219290cd182a596209e10e5f17a06d428263a0e79f98aeb1edf8

