LLM Roleplay: Simulating Human-Chatbot Interaction
Roleplay is a Python package that provides an easy method for generating goal-oriented, persona-based multi-turn dialogues, simulating diverse human-chatbot interactions. This repository contains the code and data for the LLM Roleplay method, as presented in the paper LLM Roleplay: Simulating Human-Chatbot Interaction. It includes all experiment code and the data needed to replicate the experiments described in the paper.
More About Roleplay
The LLM Roleplay (llm-roleplay) codebase is built upon the UrarTU framework (version 2). For detailed insights into its structure, please refer to the Getting Started Guide.
Installation
Getting started with `llm-roleplay` is a breeze! 💨 Just follow these steps to set up the necessary packages and create a local package called `llm-roleplay`:
- Clone the repository:
git clone git@github.com:UKPLab/llm-roleplay.git
- Navigate to the project directory:
cd llm-roleplay
- Execute the magic command:
pip install .
🪄 After running the previous command, `llm-roleplay` will install the required packages, including the latest version of `urartu` (>=2.0), and will be ready to use.
Plus, an alias will be created, allowing you to access llm-roleplay from any directory on your operating system effortlessly:
urartu --help
Now, to register `llm-roleplay` under the corresponding name in `urartu`, run the following command, providing the path where the module is located (for more info, refer to UrarTU's documentation):
urartu register --name=llm_roleplay --path=PATH_TO_ROLEPLAY/llm_roleplay
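For example, if you cloned the repository into your home directory, the command might look like this (the path below is purely illustrative; substitute the actual location of your clone):

```bash
# Register the local llm_roleplay module with urartu (illustrative path)
urartu register --name=llm_roleplay --path=$HOME/llm-roleplay/llm_roleplay
```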
After this, you can run `urartu -h` again to see the available modules under the `launch` command and make sure that `llm_roleplay` is present there.
Exploring the Experiments
Before diving into using `llm-roleplay`, let's set up Aim. This tool will track our experiment metadata and generated dialogues, storing them locally on our system.
Let's start the Aim server to store all the metadata and dialogues of our experiments. By default, it will run on port 53800. Use this command to get it running:
aim server
Since we are running the Aim server on our local machine, we will use the address `aim://0.0.0.0:53800`. For remote tracking, refer to Track experiments with aim remote server.
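For reference, here is a minimal sketch of what the Aim config file (`llm_roleplay/configs_{username}/aim/aim.yaml`) might contain, assuming it exposes a field for the server address; the exact key names come from UrarTU, so check `configs_tamoyan/aim/aim.yaml` in the repository for the authoritative schema:

```yaml
# llm_roleplay/configs_{username}/aim/aim.yaml -- illustrative sketch, key names may differ
aim:
  repo: aim://0.0.0.0:53800   # address of the Aim server started above
```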
To explore the wealth of metrics that Aim captures effortlessly, follow these steps:
- Navigate to the directory containing the `.aim` repository.
- Run the command that sparks the magic:
aim up
Usage
Let's get started with generating dialogues using the `llm-roleplay` action. The process is simple: just provide the name of the configuration file containing the action, followed by the action name itself. For the `llm-roleplay` action, we'll use the Mixtral 8x7B model as the inquirer. 🎇
urartu launch --name=llm_roleplay action_config=dialogue_generator aim=aim slurm=slurm +action_config/task/model_inquirer=mixtral +action_config/task/model_responder=llama action_config.task.model_inquirer.api_token="YOUR_TOKEN"
The `aim` and `slurm` configs read the Aim and Slurm configurations from the `aim` and `slurm` files located at `llm_roleplay/configs_{username}/aim/aim.yaml` and `llm_roleplay/configs_{username}/slurm/slurm.yaml`, respectively. The `action_config` parameter specifies which configuration file to use to run the action. Afterward, we define the configuration file for the inquirer with the `model_inquirer` argument and set the configuration for the responder with the `model_responder` argument.
To execute the command on a Slurm cluster, modify the `llm_roleplay/configs_{username}/slurm/slurm.yaml` file with the corresponding fields, and then use the same command to run the job. For more details on how to edit the configuration files, please refer to the upcoming sections.
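As a rough idea, such a Slurm config might contain entries along these lines; the field names below are assumptions based on typical Slurm setups, so consult `configs_tamoyan/slurm/slurm.yaml` and UrarTU's documentation for the actual schema:

```yaml
# llm_roleplay/configs_{username}/slurm/slurm.yaml -- illustrative sketch, not the exact schema
slurm:
  use_slurm: true       # submit the job to the cluster instead of running locally
  partition: gpu        # hypothetical partition name; use your cluster's
  time: "24:00:00"      # hypothetical wall-clock limit
  gres: "gpu:1"         # hypothetical GPU request
  mem: "64G"            # hypothetical memory request
```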
Hugging Face Authentication: You might need to log in to Hugging Face to authenticate your use of Mixtral 8x7B. To do this, use the `huggingface-cli login` command and provide your access token. To obtain a Hugging Face access token, refer to User access tokens.
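For example (the token value is a placeholder for your own access token):

```bash
# Interactive login: paste your token when prompted
huggingface-cli login

# Or pass the token directly
huggingface-cli login --token YOUR_HF_TOKEN
```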
Configs: Tailoring Your Setup
The default configs, which define how all configurations are shaped, live in `urartu` under the `urartu/config` directory:
- `urartu/config/main.yaml`: This core configuration file sets the foundation for default settings, covering all available keys within the system.
- `urartu/config/action_config` directory: A designated space for specific action configurations. For more, see the structure of UrarTU.
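For orientation, the configuration layout referenced in this section looks roughly like this (simplified sketch; only the files mentioned in this document are shown):

```
urartu/config/
├── main.yaml                     # core defaults covering all available keys
└── action_config/                # action-specific configurations
llm_roleplay/configs/
└── action_config/
    └── dialogue_generator.yaml   # project-specific settings for dialogue generation
llm_roleplay/configs_{username}/  # per-user overrides (e.g. configs_tamoyan)
├── aim/aim.yaml
└── slurm/slurm.yaml
```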
Crafting Customizations
You have two flexible options for tailoring your configurations in `llm-roleplay`.
- Custom Config Files: To simplify configuration adjustments, `llm-roleplay` provides a dedicated `configs` directory where you can store personalized configuration files. These files seamlessly integrate with Hydra's search path. The directory structure mirrors that of `urartu/config`. You can define project-specific configurations in specially named files. The `dialogue_generator.yaml` file within the `configs` directory houses all the configurations specific to our `llm-roleplay` project, with customized settings.
  - Personalized User Configs: To further tailor configurations for individual users, create a directory named `configs_{username}` at the same level as the `configs` directory, where `{username}` represents your operating system username (check out `configs_tamoyan` for an example). The beauty of this approach is that there are no additional steps required: your customizations will smoothly load and override the default configurations (a minimal override sketch is shown after this list). ✨ The order of precedence for configuration overrides is `urartu/config`, then `llm_roleplay/configs`, then `llm_roleplay/configs_{username}`, giving priority to user-specific configurations.
- CLI Approach: For those who prefer a command-line interface (CLI), `urartu` offers a convenient alternative: you can pass specific key-value pairs directly on the command line. For example, modifying your working directory path is as simple as:
  urartu launch --name=llm_roleplay action_config=dialogue_generator action_config.workdir=PATH_TO_WORKDIR
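For instance, a minimal user-level override of the working directory might look like the sketch below. The file path and value are illustrative, and the key mirrors the `action_config.workdir` parameter used in the CLI example above; depending on how the defaults are composed, you may instead need to copy the full default file and change only the fields you care about:

```yaml
# llm_roleplay/configs_tamoyan/action_config/dialogue_generator.yaml -- illustrative override sketch
action_config:
  workdir: /path/to/your/workdir   # overrides the default working directory
```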
Choose the method that suits your workflow best and enjoy the flexibility `urartu` provides for crafting custom configurations.
Effortless Launch
With `urartu`, launching actions is incredibly easy, offering you two options. 🚀
- Local Marvel: This option allows you to run jobs on your local machine, right where the script is executed.
- Cluster Voyage: This choice takes you on a journey to the Slurm cluster. By adjusting the `slurm.use_slurm` setting in `llm_roleplay/configs/action_config/dialogue_generator.yaml`, you can easily switch between local and cluster execution (see the fragment just below this list).
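The relevant fragment of that file looks roughly like this (only the toggle is shown; surrounding keys are omitted and the nesting is assumed from the `slurm.use_slurm` dotted path):

```yaml
# Fragment of llm_roleplay/configs/action_config/dialogue_generator.yaml (illustrative)
slurm:
  use_slurm: false   # set to true to submit the run to the Slurm cluster
```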
Enjoy the flexibility to choose the launch adventure that best suits your needs and goals!
You're all set to dive into goal-oriented, persona-based, diverse, multi-turn dialogue generation with `llm-roleplay`! 🌟 If you encounter any issues or have suggestions, feel free to open an issue for assistance. 😊
Cite
Please use the following citation:
@misc{tamoyan2024llmroleplaysimulatinghumanchatbot,
title={LLM Roleplay: Simulating Human-Chatbot Interaction},
author={Hovhannes Tamoyan and Hendrik Schuff and Iryna Gurevych},
year={2024},
eprint={2407.03974},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2407.03974},
}
Contacts
Hovhannes Tamoyan, Hendrik Schuff
Please feel free to contact us if you have any questions or need to report any issues.
Links
UKP Lab Homepage | TU Darmstadt Website
Disclaimer
This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.
File details
Details for the file `llm-roleplay-2.0.7.tar.gz`.
File metadata
- Download URL: llm-roleplay-2.0.7.tar.gz
- Upload date:
- Size: 19.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.12
File hashes
Algorithm | Hash digest
---|---
SHA256 | `4fd6aef83e1b01c3e2b3b2df893794ed16f83d5ff9c1248dd4e8219caf8a4928`
MD5 | `0db3b0ba821bc150f2b4eac4311c7d31`
BLAKE2b-256 | `1189cb6bf581bb022ede4ea6dd5f27bcd78915e90c7028393ad50e333fa93ecd`
File details
Details for the file `llm_roleplay-2.0.7-py3-none-any.whl`.
File metadata
- Download URL: llm_roleplay-2.0.7-py3-none-any.whl
- Upload date:
- Size: 16.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.12
File hashes
Algorithm | Hash digest
---|---
SHA256 | `8fa5c094c35d0e0fc6d28068e0b15569f4079a07b2f63a8e121da3452a9b16b6`
MD5 | `8c0bc736f1b21fe3d80d1c78a138057b`
BLAKE2b-256 | `7f9e534ca1d6adf75a03753e18a2854499c475fe1dd8eb92ee947afaa7b32305`