LLaMEA: Large Language Model Evolutionary Algorithm


Introduction

LLaMEA (Large Language Model Evolutionary Algorithm) is a framework that leverages large language models (LLMs) such as GPT-4 to automatically generate and refine metaheuristic optimization algorithms. It evolves algorithms iteratively, guided by performance metrics and runtime evaluations, without requiring extensive prior algorithmic knowledge. This makes LLaMEA a useful tool for both research and practical applications in fields where optimization is crucial.

Features


  • Automated Algorithm Generation: Automatically generates and refines algorithms using GPT models.
  • Performance Evaluation: Integrates with the IOHexperimenter for real-time performance feedback, guiding the evolutionary process to generate metaheuristic optimization algorithms.
  • Customizable Evolution Strategies: Supports configuration of evolutionary strategies to explore algorithmic design spaces effectively.
  • Extensible and Modular: Designed to be flexible, allowing users to integrate other models and evaluation tools.

Getting Started

Prerequisites

  • Python 3.8 or later
  • OpenAI API key for accessing GPT models

Installation

The easiest way to use LLaMEA is to install it from the PyPI package:

  pip install llamea

You can also install the package from source using Poetry.

  1. Clone the repository:
    git clone https://github.com/nikivanstein/LLaMEA.git
    cd LLaMEA
    
  2. Install the required dependencies via Poetry:
    poetry install
    

How to use

  1. Set up an OpenAI API key:

    • Obtain an API key from OpenAI.
    • Set the API key in your environment variables:
      export OPENAI_API_KEY='your_api_key_here'
      
  2. Running an Experiment

    To run an optimization experiment using LLaMEA:

    from llamea import LLaMEA
    
    # Define your evaluation function: it receives a candidate solution and
    # must return a tuple of (feedback for the LLM, quality score, error info)
    def your_evaluation_function(solution):
        # Implementation of your function
        return "feedback for LLM", 0.1, ""
    
    # Initialize LLaMEA with your API key and other parameters
    optimizer = LLaMEA(f=your_evaluation_function, api_key="your_api_key_here")
    
    # Run the optimizer
    best_solution, best_fitness = optimizer.run()
    print(f"Best Solution: {best_solution}, Fitness: {best_fitness}")
    
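The placeholder above always returns the same score; in practice the evaluation function runs the generated code and measures its quality. As an illustrative sketch (not part of the LLaMEA API), the function below executes a candidate that defines `optimize(func, dim, budget)` and scores the returned solution on a toy sphere objective. The `optimize` interface, the bounds, and the scoring are assumptions chosen for this example:

```python
def sphere(x):
    # Toy objective: global minimum 0 at the origin.
    return sum(v * v for v in x)

def evaluate_candidate(code):
    """Execute candidate code that defines `optimize(func, dim, budget)`
    and score the solution it returns on the sphere function."""
    namespace = {}
    try:
        exec(code, namespace)
        best_x = namespace["optimize"](sphere, dim=5, budget=200)
        value = sphere(best_x)
        # Higher score is better; failing candidates rank last with -inf.
        return f"Reached objective value {value:.4f}", -value, ""
    except Exception as exc:
        return "The candidate raised an error.", float("-inf"), repr(exc)

# A hand-written candidate standing in for LLM output: pure random search.
candidate = '''
import random
def optimize(func, dim, budget):
    best = [random.uniform(-5, 5) for _ in range(dim)]
    for _ in range(budget):
        trial = [random.uniform(-5, 5) for _ in range(dim)]
        if func(trial) < func(best):
            best = trial
    return best
'''

feedback, score, error = evaluate_candidate(candidate)
```

Because errors are caught and reported back as the third tuple element, a broken candidate produces feedback the LLM can use in the next refinement round instead of crashing the run.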

Contributing

Contributions to LLaMEA are welcome! Here are a few ways you can help:

  • Report Bugs: Use GitHub Issues to report bugs.
  • Feature Requests: Suggest new features or improvements.
  • Pull Requests: Submit PRs for bug fixes or feature additions.

Please refer to CONTRIBUTING.md for more details on contributing guidelines.

License

Distributed under the MIT License. See LICENSE for more information.

Citation

If you use LLaMEA in your research, please consider citing the associated paper:

@misc{vanstein2024llamea,
      title={LLaMEA: A Large Language Model Evolutionary Algorithm for Automatically Generating Metaheuristics}, 
      author={Niki van Stein and Thomas Bäck},
      year={2024},
      eprint={2405.20132},
      archivePrefix={arXiv},
      primaryClass={cs.NE}
}

For more details, please refer to the documentation and tutorials available in the repository.

The overall LLaMEA workflow, as a Mermaid flowchart:

    flowchart LR
        A[Initialization] -->|Starting prompt| B{Stop? fa:fa-hand}
        B -->|No| C(Generate Algorithm - LLM)
        B -->|Yes| G{{Return best so far fa:fa-code}}
        C -->|fa:fa-code| D(Evaluate)
        D -->|errors, scores| E[Store session history fa:fa-database]
        E --> F(Construct Refinement Prompt)
        F --> B
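The generate-evaluate-store-refine loop in the diagram can be sketched in plain Python. This is an illustrative outline, not the actual LLaMEA implementation; the `generate` and `evaluate` callables are stand-ins for the LLM and the benchmark:

```python
def run_loop(generate, evaluate, budget=5):
    """Generate -> Evaluate -> Store -> Refine, mirroring the diagram."""
    history = []                                   # session history
    prompt = "starting prompt"                     # initialization
    best_algorithm, best_score = None, float("-inf")
    for _ in range(budget):                        # Stop? budget exhausted
        algorithm = generate(prompt)               # Generate Algorithm (LLM)
        feedback, score, error = evaluate(algorithm)   # Evaluate
        history.append((algorithm, score, error))      # Store session history
        if score > best_score:
            best_algorithm, best_score = algorithm, score
        # Construct the refinement prompt from the evaluation feedback.
        prompt = f"Previous score: {score}. {feedback}"
    return best_algorithm, best_score              # Return best so far

# Stubs standing in for the LLM and the evaluator:
scores = iter([0.1, 0.4, 0.3, 0.7, 0.5])
best = run_loop(
    generate=lambda prompt: f"algorithm for: {prompt}",
    evaluate=lambda alg: ("try a smaller step size", next(scores), ""),
)
```

The session history lets each refinement prompt build on earlier attempts, which is what steers the search toward better algorithms over successive iterations.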

