horde-safety
Provides safety features used by the AI Horde, especially to do with image generation.
Note
This library is made with the default AI Horde worker in mind and relies on the environment variable AIWORKER_CACHE_HOME to establish isolation of the CLIP models on disk. If you do not want to rely on a horde-specific folder structure, set TRANSFORMERS_CACHE to the location where you'd prefer the models to be stored. If neither is defined, the default Hugging Face cache location for the system is used (typically ~/.cache, depending on other environment variables; see the official Hugging Face docs for more information).
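As a minimal sketch of the override described above (the /data/models path is purely illustrative; choose any directory you prefer), the variable can be set from Python before the models are first downloaded or loaded:

```python
import os

# Illustrative only: point the model cache at a custom directory.
# This must be set before horde_safety first downloads/loads the CLIP models.
os.environ["TRANSFORMERS_CACHE"] = "/data/models"
```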
Installing
Make sure PyTorch is installed, preferably with CUDA/ROCm support.
pip install horde_safety
Use
This library currently relies on clip_interrogator. The check_for_csam function requires an instance of clip_interrogator.Interrogator to be passed. You can pass one in yourself, or use the helper function get_interrogator_no_blip (note that calling this function immediately loads the CLIP model). Use the device parameter to choose the device to load to and use for interrogation. If you want to use only the CPU but have CUDA PyTorch installed, use get_interrogator_no_blip(device="cpu").
```python
import PIL.Image

from horde_safety.deep_danbooru_model import get_deep_danbooru_model
from horde_safety.interrogate import get_interrogator_no_blip
from horde_safety.nsfw_checker_class import NSFWChecker, NSFWResult

interrogator = get_interrogator_no_blip()  # Will trigger a download if not on disk (~1.2 GB)
deep_danbooru_model = get_deep_danbooru_model()  # Will trigger a download if not on disk (~614 MB)

nsfw_checker = NSFWChecker(
    interrogator,
    deep_danbooru_model,  # Optional; significantly improves results for anime images
)

image: PIL.Image.Image = PIL.Image.open("image.jpg")
prompt: str | None = None  # If this is an image generation, you can provide the prompt here
model_info: dict | None = None  # If this is an image generation, you can provide the model info here

nsfw_result: NSFWResult | None = nsfw_checker.check_for_nsfw(image, prompt=prompt, model_info=model_info)

if nsfw_result is None:
    print("No NSFW result (did it fail to load the image?)")
    exit(1)

if nsfw_result.is_anime:
    print("Anime detected!")
if nsfw_result.is_nsfw:
    print("NSFW detected!")
if nsfw_result.is_csam:
    print("CSAM detected!")
    exit(1)
```
If you reject a job as a horde worker because of CSAM, you should report 'state': 'csam' in the generate submit payload.
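For illustration, such a submit payload might look like the dictionary below. Only the 'state': 'csam' key/value comes from this document; the other keys and the job identifier are hypothetical placeholders, not the actual AI Horde API schema.

```python
job_id = "00000000-0000-0000-0000-000000000000"  # hypothetical job identifier

# Sketch of a generate submit payload for a CSAM rejection; only "state" is
# documented above, the other fields are illustrative assumptions.
submit_payload = {
    "id": job_id,
    "state": "csam",
}
```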