
๐Ÿฑ Groq QA Generator


Q: "Would you tell me, please, which way I ought to go from here?"
A: "That depends a good deal on where you want to get to," said the Cat.
— Alice's Adventures in Wonderland

Groq QA is a Python library and CLI tool that automates the creation of question-answer pairs from text, designed to aid in fine-tuning large language models (LLMs). Built on the Groq API, it leverages large models such as LLaMA 3 70B and LLaMA 3.1 70B (with a context window of up to 128K tokens), which are well suited to generating high-quality QA pairs. The tool streamlines the preparation of custom datasets, helping improve LLM performance on specialized tasks with minimal manual effort. It is particularly useful for fine-tuning models in research, education, and domain-specific applications.

Note: This project is not affiliated with or endorsed by Groq, Inc.

✨ Features

✨ Feature | 📄 Description
--- | ---
✅ CLI | Use the CLI tool with the groq-qa command.
✅ Python Library | Import directly into your own Python code.
✅ Advanced Models | Supports large models like LLaMA 3.1 70B via the Groq API.
✅ Automated QA Generation | Generate question-answer pairs from input text.
✅ Prompt Templates | Flexible question generation through prompt templates.
✅ Customizable Configurations | Configure via CLI, config.json, or directly in Python code.

๐Ÿ‘จโ€๐Ÿ’ป About the Developer

Hey there! I'm Jordan, a Canadian network engineer with over a decade of experience, especially from my time in the fast-paced world of California startups. My focus has been on automating test systems aligned with company KPIs, making it easier for teams to make data-driven decisions.

Whether it's tackling tough challenges, improving codebases, or working on innovative ideas, I'm always up for the task. Let's connect and make things happen! ✌️



🚀 Quick Start

  1. Install the package via pip:

    pip install groq-qa-generator
    
  2. Set up the API key:

    export GROQ_API_KEY=your_api_key_here
    
  3. Run the groq-qa command with default settings (~/.groq_qa/config.json):

    groq-qa
    
  4. View the results in ~/.groq_qa/qa_output.txt.

📦 Upgrading

To ensure that you have the latest features, bug fixes, and improvements, it is recommended to periodically upgrade the groq_qa_generator package.

You can update groq_qa_generator to the latest version by running:

pip install --upgrade groq-qa-generator

📖 Documentation

You can access the full HTML documentation here:

👉 Groq QA Generator Documentation 👈

Documentation Generation

The documentation is automatically generated using Sphinx, a documentation generation tool for Python projects. Every change made to the documentation directory (docs/) triggers a GitHub Actions workflow that builds the HTML files and deploys them to GitHub Pages. This ensures that the documentation stays up-to-date with the latest project changes.
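
For reference, a Sphinx-to-Pages workflow of this shape typically looks like the sketch below. This is illustrative only, not necessarily this project's actual workflow file; the file name, action versions, and build paths are assumptions:

# .github/workflows/docs.yml (illustrative sketch)
name: docs
on:
  push:
    paths:
      - "docs/**"          # rebuild only when documentation changes
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install sphinx
      - run: sphinx-build -b html docs docs/_build/html
      - uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: docs/_build/html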

โš™๏ธ Using groq-qa

Set Up the API Key

First, you need to acquire an API key. Sign up at the Groq website and follow their instructions to generate a key.

Setting the Environment Variable

Once you have the key, export it as an environment variable using the following commands based on your operating system:

macOS and Linux

export GROQ_API_KEY=gsk_example_1234567890abcdef

Windows

set GROQ_API_KEY=gsk_example_1234567890abcdef
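
To confirm the variable is visible before running the tool, here is a quick, purely illustrative check from Python:

# check_key.py: verify that GROQ_API_KEY is set in the environment
import os

assert os.environ.get("GROQ_API_KEY"), "GROQ_API_KEY is not set"
print("GROQ_API_KEY found.")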

Command-Line Interface (CLI)

Once installed, the command groq-qa becomes available. By default, this command reads the default configuration located at ~/.groq_qa/config.json.

Here are a few examples of how to use the groq-qa command:

# Run with default config.json:
groq-qa 

# Output results in JSON format:
groq-qa --json 

# Run with model, temperature, questions, and json overrides:
groq-qa --model llama3-70b-8192 --temperature 0.9 --questions 1 --json

CLI Options:

  • --model: The model used to generate QA pairs, overriding the value in config.json. The default is llama3-70b-8192.
  • --temperature: Controls the randomness of the model's output. Lower values produce more deterministic, focused output, while higher values make the output more random. The default is 0.1.
  • --questions: Specifies the exact number of question-answer pairs to generate per chunk of text. For example, using 1 forces the system to generate one QA pair for each chunk, regardless of chunk size or token limits.
  • --json: If this flag is included, the output is saved in JSON format; by default, the output is stored as a plain text file.

🛠 Configuration

When you run the groq-qa command for the first time, a user-specific configuration directory (~/.groq_qa/) is automatically created. This directory contains all the necessary configuration files and templates for customizing input, prompts, and output.

Directory Structure

~/.groq_qa/
├── config.json
├── data
│   ├── alices_adventures_in_wonderland.txt
│   ├── sample_input_data.txt
│   └── prompts
│       ├── sample_question.txt
│       └── system_prompt.txt
├── qa_output.json
└── qa_output.txt

Default config.json

{
    "system_prompt": "system_prompt.txt",
    "sample_question": "sample_question.txt",
    "input_data": "sample_input_data.txt",
    "output_file": "qa_output.txt",
    "model": "llama3-70b-8192",
    "chunk_size": 512,
    "tokens_per_question": 60,
    "temperature": 0.1,
    "max_tokens": 1024
}

🔧 Customizing the Configuration

The ~/.groq_qa directory contains essential files that can be customized to suit your specific needs. This directory includes the following components:

  • 📄 config.json: This is the main configuration file where you can set various parameters for the QA generation process. You can customize settings such as:
    • 📝 system_prompt: Specify the path to your custom system prompt file that defines how the model should behave.
    • ❓ sample_question: Provide the path to a custom sample question file that helps guide the generation of questions.
    • 📖 input_data: Set the path to your own text file from which you want to generate question-answer pairs.
    • 💾 output_file: Define the path where the generated QA pairs will be saved.

Other configurable options include:

  • 🤖 model: Select the model to be used for generating QA pairs (e.g., llama3-70b-8192).
  • 📏 chunk_size: Set the number of tokens for each text chunk (e.g., 512).
  • 🪙 tokens_per_question: Specify the number of tokens allocated for each question (e.g., 60).
  • 🔥 temperature: Control the randomness of the model's output (e.g., 0.1).
  • 🪙 max_tokens: Define the maximum number of tokens the model can generate in the response (e.g., 1024).

By adjusting these files and settings, you can create a personalized environment for generating question-answer pairs that align with your specific use case.
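
For example, a config.json customized to point at your own corpus might look like this (the file names below are placeholders for your own files):

{
    "system_prompt": "my_system_prompt.txt",
    "sample_question": "my_sample_question.txt",
    "input_data": "my_corpus.txt",
    "output_file": "my_qa_pairs.txt",
    "model": "llama-3.1-70b-versatile",
    "chunk_size": 1024,
    "tokens_per_question": 80,
    "temperature": 0.3,
    "max_tokens": 1024
}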

๐Ÿ‡ Input Data

This project uses text data from Alice's Adventures in Wonderland by Lewis Carroll, sourced from Project Gutenberg. The full text is available in the included data/alices_adventures_in_wonderland.txt file.

Sample Input Data

For demonstration purposes, a smaller sample of the full text is included in data/sample_input_data.txt. This file contains a portion of the main text, used to quickly test and generate question-answer pairs without processing the entire book.

🤖 Models

The groq_qa_generator currently supports the following models via the Groq API:

Model Name | Model ID | Developer | Context Window | Description | Documentation Link
--- | --- | --- | --- | --- | ---
LLaMA 70B | llama3-70b-8192 | Meta | 8,192 tokens | A large language model with 70 billion parameters, suitable for high-quality QA pair generation. | Model Card
LLaMA 3.1 70B | llama-3.1-70b-versatile | Meta | 128K tokens | A versatile large language model suitable for diverse applications. | Model Card

Note: For optimal QA pair generation, it is recommended to use a larger model such as 70B, as its capacity helps ensure higher quality output. See Groq's supported models documentation for all options.
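
To check programmatically which models your API key can access, the Groq Python SDK exposes a model-listing endpoint. A minimal sketch, assuming the groq package is installed and GROQ_API_KEY is set:

# list_models.py: print the model IDs available to your API key
from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

for model in client.models.list().data:
    print(model.id)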

๐Ÿ Python Library

In addition to CLI usage, the groq_qa_generator can be used directly in your Python project. Below is an example of how to configure and execute the question-answer generation process using a custom configuration:

Example

# main.py

from groq_qa_generator import groq_qa

def main():
    # Define a custom configuration
    custom_config = {
        "system_prompt": "custom_system_prompt.txt",
        "sample_question": "custom_sample_question.txt",
        "input_data": "custom_input_data.txt",
        "output_file": "qa_output.txt",
        "model": "llama3-70b-8192",
        "chunk_size": 512,
        "tokens_per_question": 60,
        "temperature": 0.1,
        "max_tokens": 1024
    }

    qa_pairs = groq_qa.generate(custom_config)
    print(qa_pairs)

if __name__ == "__main__":
    main()

This allows you to integrate the functionality within any Python application easily.

🧠 Technical Overview

  1. 🔑 API Interaction:

    • API Key Management: The API key is securely retrieved from environment variables to ensure safe access to the Groq API.
    • Client Initialization: A Groq API client is initialized to enable communication with the service, providing access to powerful models like LLaMA 70B.
  2. 📄 Text Processing:

    • Loading Prompts and Questions: The library includes methods to load sample questions and system prompts from specified file paths. These prompts are essential for guiding LLaMA 70B's response.
    • Generating Full Prompts: The system prompt and sample question are combined into a complete prompt for the Groq API.
  3. 🤖 QA Pair Generation:

    • The core process takes a list of text chunks and generates question-answer pairs by using the Groq API to prompt LLaMA 70B. To do this, groq-qa-generator (see the simplified sketch after this list):
      • Loads the system prompt and sample question.
      • Iterates through each text chunk, creating a full prompt for LLaMA 70B.
      • Requests the completion from the Groq API, which relays the prompt to the model.
      • Streams the completion response and converts it into question-answer pairs.
      • Writes the generated QA pairs to the output file.
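
The sketch below illustrates this flow with the official groq client. It is a simplified stand-in, not the library's actual implementation: the whitespace chunker and hard-coded file names are assumptions made for the sake of a runnable example (the real tool chunks by token count and reads its paths from config.json).

# qa_sketch.py: simplified illustration of the QA generation loop
from groq import Groq

client = Groq()  # API key is read from the GROQ_API_KEY environment variable

def chunk_text(text, chunk_size=512):
    """Naive whitespace chunker; the real library chunks by token count."""
    words = text.split()
    for i in range(0, len(words), chunk_size):
        yield " ".join(words[i:i + chunk_size])

system_prompt = open("system_prompt.txt").read()
sample_question = open("sample_question.txt").read()

for chunk in chunk_text(open("sample_input_data.txt").read()):
    # Combine the sample question and the current chunk into one user prompt.
    response = client.chat.completions.create(
        model="llama3-70b-8192",
        temperature=0.1,
        max_tokens=1024,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"{sample_question}\n\n{chunk}"},
        ],
    )
    print(response.choices[0].message.content)  # generated QA pairs for this chunk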

🧪 Running Tests

To run the project's tests, you can use Poetry and pytest. Follow these steps:

  1. 📦 Install Poetry: If you haven't already, install Poetry using pip.
    pip install poetry
    
  2. 🔧 Install Dependencies: Navigate to the project directory and install the dependencies.
    cd groq-qa-generator
    poetry install
    
  3. ๐Ÿƒ Run Tests: Use pytest to run the tests.
    poetry run pytest
    

The tests cover various components, including:

  • API interaction
  • Configuration loading
  • Tokenization
  • QA generation

๐Ÿค How to Contribute

  1. ๐Ÿด Fork the Repository: Click the "Fork" button at the top-right of the repository page to create your copy.

  2. 📥 Clone Your Fork: Clone the forked repository to your local machine.

    git clone https://github.com/your-username/groq-qa-generator.git
    
  3. 🌿 Create a Branch: Use a descriptive name for your branch to clearly indicate the purpose of your changes. This helps maintain organization and clarity in the project.

    git checkout -b feature/your-feature-name
    
  4. 🔧 Set Up the Environment: Use Poetry to install the project dependencies.

    cd groq-qa-generator
    poetry install
    
  5. โš™๏ธ Confirm the Environment: Verify that the virtual environment has been correctly set up and activated.

    poetry shell
    
  6. 📦 List Installed Packages: Ensure that the dependencies have been installed correctly by listing the installed packages.

    poetry show
    

โ“ FAQ

๐Ÿ“ Where can I find the generated QA pairs?

The generated QA pairs are saved to the output_file defined in your config.json file. By default, it saves the output in qa_output.txt, located in your home directory's .groq_qa folder (~/.groq_qa/qa_output.txt).

To change the output file name, edit the output_file field in your config.json file:

{
    "output_file": "qa_custom_output.txt"
}

🛠 Can I modify the sample question or system prompt templates?

Yes, both the system prompt and the sample question can be modified. These templates are located in the prompts directory inside the ~/.groq_qa/ folder:

  • system_prompt.txt: Defines how the model should behave and guides the generation process.
  • sample_question.txt: Defines how sample questions should be structured.

Feel free to edit these templates to suit your needs.
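
As a purely hypothetical illustration (not the shipped template), a system_prompt.txt tuned for exam-style questions might read:

You are a teacher preparing comprehension questions. Given a passage of
text, generate clear, self-contained question-answer pairs. Each question
must be answerable from the passage alone, and each answer must be concise.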

🔄 How do I override default configuration settings?

You can override the default configuration settings in two ways:

  1. Edit the config.json file located in the ~/.groq_qa/ directory.
  2. Pass command-line arguments to override specific settings, for example:
groq-qa --model llama3-70b-8192 --temperature 0.9

🎛 How can I increase the randomness of the output?

Increase the temperature value in the configuration or pass it as a command-line argument (e.g., --temperature 0.9).

๐Ÿ Can I use this tool in a larger Python project?

Yes, groq_qa_generator can be used as a Python library within your project. Simply import the module and configure it to generate QA pairs programmatically.

โš–๏ธ License

This project is licensed under the MIT License - see the LICENSE file for details.
