6 projects
nbdefense
NB Defense CLI and SDK
modelscan
The modelscan package is a cli tool for detecting unsafe operations in model files across various model serialization formats.
guardian-client
Python SDK for Protect AI Guardian
llm-guard
LLM-Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, LLM-Guard ensures that your interactions with LLMs remain safe and secure.
rebuff
Rebuff is designed to protect AI applications from prompt injection (PI) attacks through a multi-layered defense.
nbdefense_jupyter
NB Defense Jupyter Lab Extension