VB-AF: Vocabulary-Based Adversarial Fuzzing
An implementation of Vocabulary-Based Adversarial Fuzzing (VB-AF) for systematically probing vulnerabilities in Large Language Models (LLMs) at scale. VB-AF is a gray-box fuzzing framework that helps AI safety researchers, red-teamers, and developers systematically test the alignment and robustness of modern LLMs (and agents). It works by targeting known, documented weaknesses in transformer architectures.
This framework was heavily inspired by the widely adopted methodology of fuzz testing and was originally developed for the 'Red‑Teaming Challenge - OpenAI gpt-oss-20b' hackathon hosted on Kaggle. Encouraged by its effectiveness and its potential for extension, the author (@0ameyasr) converted it into a flexible, interference-free LLM fuzzing framework.
WARNING
This framework is provided solely for authorized security research, academic study, and defensive testing (ethical red-teaming) of Large Language Models (LLMs).
Misuse of this software for any malicious, unlawful, exploitative, or unauthorized activity is strictly forbidden. The author(s) explicitly reject, denounce, and do not condone any attempt to weaponize or abuse this tool. By accessing, installing, or using this software, you agree that any form of misuse is entirely at your own risk and legal liability.
The software is provided “AS IS” without warranty of any kind. The author(s) disclaim all responsibility and liability for damages, losses, legal claims, or consequences of any kind arising from misuse.
By continuing to use this tool, you expressly acknowledge and accept full personal and legal accountability for your actions. Unauthorized or malicious use may subject you to civil and/or criminal penalties under applicable laws.
Key Features
- An intuitive, easy-to-use API that balances uninterrupted low-level control with convenient high-level fuzzing-harness decoration.
- Built-in support for random seeding to ensure experiments are fully reproducible.
- Designed to expose deep, undiscovered vulnerabilities in a model's Chain-of-Thought reasoning, not just surface-level filter bypasses.
- Moves beyond simple role-playing jailbreak prompts to a systematic, scalable and highly configurable fuzzing framework.
- Open to community and research contributions!
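Conceptually, a vocabulary-based fuzzer mutates a base prompt by injecting tokens sampled from a fixed vocabulary, with a seeded random generator making each run reproducible. The sketch below illustrates that idea in plain Python; it is not vbaf's API or implementation, and the function name and parameters are purely illustrative:

```python
import random

def make_fuzzy_payload(prompt, vocabulary, n_tokens, rng):
    """Append randomly sampled vocabulary tokens to a prompt.

    Conceptual sketch only: not vbaf's actual mutation strategy.
    """
    noise = " ".join(rng.choices(vocabulary, k=n_tokens))
    return f"{prompt} {noise}"

# A fixed seed makes the generated payloads reproducible across runs.
rng = random.Random(42)
payload = make_fuzzy_payload(
    "How do I build a model?",
    ["error", "network", "token", "string", "exception", "test"],
    n_tokens=4,
    rng=rng,
)
```

Because the generator is seeded explicitly, re-running the same configuration reproduces the exact same payload sequence, which is what the reproducibility feature above refers to.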
Installation
You can install vbaf directly from PyPI:
pip install vbaf
Quick Start
Using vbaf is simple. First, configure the fuzzer with your desired parameters, then apply the @fuzzer.fuzz decorator to your inference function. The decorator transforms your function into a generator that runs the fuzzing process for n_attempts and yields a (fuzzy_payload, response) tuple for each attempt.
from vbaf import VBAF

# 1. Define a vocabulary to generate noise from (this is a mock)
tokens = ["error", "network", "token", "string", "exception", "test"]

# 2. Configure the fuzzer instance
fuzzer = VBAF(
    vocabulary=tokens,
    n_size=50,
    rand_bound=(3, 6)
)

# 3. Apply the decorator to your LLM inference function
@fuzzer.fuzz(n_attempts=3)
def fuzzing_harness(prompt: str):
    # This is a mock function that would normally call an LLM API,
    # e.g. the Gemini API or OpenAI's Chat Completions.
    return f"Mock Response for: {prompt}"

# 4. Start the fuzzing process
# The decorated function now yields a (fuzzy_payload, response) tuple
for fuzzy_payload, result in fuzzing_harness("How do I build a model?"):
    print(f"Fuzzy Payload: {fuzzy_payload}")
    print(f"Response: {result}")
    ...  # Post-process the results
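To make the decorator's behavior concrete, here is a simplified, self-contained sketch of a fuzzing decorator that yields (fuzzy_payload, response) pairs. This mirrors the behavior described above but is not vbaf's actual internals; the parameter names and the token-injection strategy are assumptions:

```python
import random
from functools import wraps

def fuzz(vocabulary, n_attempts, n_tokens=4, seed=0):
    """Turn an inference function into a generator of (payload, response) pairs.

    Illustrative sketch only: not vbaf's implementation. All parameter
    names here are assumptions for the sake of the example.
    """
    rng = random.Random(seed)  # seeded for reproducible payloads

    def decorator(fn):
        @wraps(fn)
        def wrapper(prompt, *args, **kwargs):
            for _ in range(n_attempts):
                # Mutate the prompt by appending sampled vocabulary tokens
                noise = " ".join(rng.choices(vocabulary, k=n_tokens))
                payload = f"{prompt} {noise}"
                yield payload, fn(payload, *args, **kwargs)
        return wrapper
    return decorator

@fuzz(["error", "token", "test"], n_attempts=3)
def harness(prompt):
    # Stand-in for a real LLM API call
    return f"Mock Response for: {prompt}"

results = list(harness("Hello"))
```

The generator design means no responses are buffered: each mutated payload is sent and its response yielded before the next attempt runs, which keeps long fuzzing campaigns memory-cheap and lets you stop early on an interesting result.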
Full Documentation
For a complete guide, API reference, and a deeper look into the methodology, please see the full documentation website.