
HFAutogen

Seamlessly use models provided by the HuggingFace Inference API with Autogen.

Introduction

HFAutogen bridges the gap between Hugging Face's powerful Inference API and the convenience of AutoGen, providing a seamless integration for developers looking to leverage the best of both worlds. It lets Hugging Face models power AutoGen's automated code-execution and multi-agent capabilities, making it easier to implement AI-powered features without large local computational resources.


Installation

pip install hfautogen

Examples

Example 1

In this example, we import the functions needed to set up a user agent and an assistant agent, then initialize a chat between the two, starting with the prompt given in `_input`.

```python
from hfautogen import ModelAgent, UserAgent, InitChat

_input = input("Enter text or press Enter to load automated message.\n")
hf_key = "hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

user = UserAgent("user_proxy")
assistant = ModelAgent("assistant",
                       hf_key,
                       system_message="You are a friendly AI assistant.")

InitChat(user, assistant, _input)
```

Features

Free, extremely lightweight access to open-source HuggingFace models.

HFAutogen leverages open-source technology by integrating HuggingFace's Inference API with Autogen, giving users state-of-the-art AI capabilities at no cost. Developers and researchers can access a wide range of pre-trained natural language processing (NLP) models from the HuggingFace Model Hub without worrying about financial overhead. The lightweight access mechanism means users can start experimenting and deploying AI features in their projects quickly and efficiently, without extensive setup or high-end computational resources.

Automatic Code Execution

With HFAutogen, users benefit from an environment that automates code execution. This capability is inherited from Autogen, while HFAutogen lowers the barrier to entry for using it. Designed to streamline the development workflow, HFAutogen makes it easier and faster than ever to test and deploy multi-agent communication systems. Automatic code execution means that once users configure their project settings and inputs, HFAutogen takes care of the rest: compiling, running, and providing outputs without manual intervention. This automation reduces the potential for human error, speeds up the development cycle, and lets users focus on the strategic aspects of their projects.

Multi Agent Communication

Multi-agent communication is at the heart of AutoGen's capabilities, enabling different AI agents to interact with each other in a coordinated manner. This feature is crucial for developing complex systems where multiple agents need to share information, make decisions, or work together towards a common goal. HFAutogen facilitates this by providing a seamless communication framework between the user and Autogen.

Fast Prototyping

HFAutogen is designed to accelerate the prototyping phase of project development, allowing users to quickly move from concept to a working prototype. This feature is particularly beneficial in the fast-paced world of technology, where speed to market can be a critical competitive advantage. With HFAutogen, developers can rapidly test hypotheses, iterate on designs, and refine their projects with minimal delay. The combination of easy access to powerful AI models, automatic code execution, and support for multi-agent communication means that prototyping with HFAutogen is not only fast but also highly effective, enabling users to explore innovative ideas and solutions with agility.

Usage

HFAutogen exposes three callables to the user: `ModelAgent()`, `UserAgent()`, and `InitChat()`.

ModelAgent(name, hf_key, hf_url, system_message, code_execution)

  - name - _str_ required
    The name of the `ModelAgent()`

  - hf_key - _str_ required
    The API key obtained from HuggingFace.

  - hf_url - _str_ optional
    _default:_ "https://api-inference.huggingface.co/models/mistralai/Mixtral-8x7B-Instruct-v0.1"
    The HuggingFace Inference API URL.

  - system_message - _str_ optional
    _default:_ ""
    The contextual prompt for `ModelAgent()`

  - code_execution - _dict_ optional
    _default:_ False
    A dictionary that contains a `work_dir` and `use_docker` entry, e.g.:
    {"work_dir": "coding", "use_docker": False}

UserAgent(name, max_consecutive_auto_reply, code_dir, use_docker, system_message)

  - name - _str_ required
    The name of the `UserAgent()`

  - max_consecutive_auto_reply - _int_ optional
    _default:_ 2
    The maximum number of consecutive automatic replies made by the `UserAgent()`

  - code_dir - _str_ optional
    _default:_ "coding"
    The directory `UserAgent()` will use and operate out of.

  - use_docker - _bool_ optional
    _default:_ False
    If true, `UserAgent()` will run code inside a Docker container.

  - system_message - _str_ optional
    _default:_ ""
    The contextual prompt for `UserAgent()`

InitChat(user, agent, _input)

  - user - _`UserAgent()`_ required
    A `UserAgent()` object

  - agent - _`ModelAgent()`_ required
    A `ModelAgent()` object

  - _input - _str_ required
    The initial input prompt.
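Putting the three together, here is a hypothetical end-to-end sketch that exercises the optional parameters documented above. The keyword-argument style, the placeholder key, and the example prompt are assumptions for illustration, not part of the package's documentation:

```python
# Hypothetical sketch combining ModelAgent, UserAgent, and InitChat
# with the optional parameters documented in the Usage section.

HF_KEY = "hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder; use your own key
CODE_EXECUTION = {"work_dir": "coding", "use_docker": False}

def run_chat(prompt):
    # Imported inside the function so the sketch can be read without
    # hfautogen installed; actually calling run_chat() requires the package.
    from hfautogen import ModelAgent, UserAgent, InitChat

    user = UserAgent(
        "user_proxy",
        max_consecutive_auto_reply=2,   # documented default
        code_dir="coding",              # documented default
        use_docker=False,               # documented default
    )
    assistant = ModelAgent(
        "assistant",
        HF_KEY,
        system_message="You are a helpful coding assistant.",
        code_execution=CODE_EXECUTION,
    )
    InitChat(user, assistant, prompt)

# Example (requires hfautogen and a valid key):
# run_chat("Write a Python function that reverses a string.")
```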




Dependencies

pyautogen ==0.2.10

transformers ==4.38.0
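Assuming a standard pip setup, the package and its pinned dependencies can be installed in one step:

```shell
pip install hfautogen pyautogen==0.2.10 transformers==4.38.0
```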






Download files

Download the file for your platform.

Source Distribution

hfautogen.py-1.3.tar.gz (7.1 kB)

Uploaded Source

Built Distribution

hfautogen.py-1.3-py3-none-any.whl (7.5 kB)

Uploaded Python 3

File details

Details for the file hfautogen.py-1.3.tar.gz.

File metadata

  • Download URL: hfautogen.py-1.3.tar.gz
  • Upload date:
  • Size: 7.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.11.6

File hashes

Hashes for hfautogen.py-1.3.tar.gz:

  • SHA256: 3334c47b0c12ec72ffbc2c86a75d1ef57ecc3d3601f67aec17388324c39fc2af
  • MD5: ff21aa035ed40141da57d4baad801770
  • BLAKE2b-256: c46ac2bc98e8badd5b5421a1c14edcca12c26776cf5670f8067123373ca44e92


File details

Details for the file hfautogen.py-1.3-py3-none-any.whl.

File metadata

  • Download URL: hfautogen.py-1.3-py3-none-any.whl
  • Upload date:
  • Size: 7.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.11.6

File hashes

Hashes for hfautogen.py-1.3-py3-none-any.whl:

  • SHA256: 42f7c9e396666736198d30eb52ee66a4c6cc2303a562d2434225ff77e4ba36a6
  • MD5: 3ec667150aedb1f10611f847b96aa11f
  • BLAKE2b-256: e0c5e06422a012f2ae71fb8136e67dffd359b7b7619a60b5eb6de8e5d0780b1f

