
Create Disco Diffusion artworks in one line


DiscoArt is an elegant way of creating compelling Disco Diffusion[*] artworks for generative artists, AI enthusiasts and hard-core developers. DiscoArt has a modern & professional API with a beautiful codebase, ensuring high usability and maintainability. It introduces handy features such as result recovery and persistence, gRPC/HTTP serving with TLS, and post-analysis, easing integration into larger cross-modal or multi-modal applications.

[*] Disco Diffusion is a Google Colab Notebook that leverages CLIP-Guided Diffusion to allow one to create compelling and beautiful images from text prompts.

💯 Best-in-class: top-notch code quality, correctness-first, minimal dependencies; includes bug fixes and feature improvements over the original DD5.6.

👼 Available to all: smooth install for self-hosting, Google Colab free tier, non-GUI (IPython) environment, and CLI! No brainfuck, no dependency hell, no stackoverflow.

🎨 Focus on create not code: a one-liner create() with a Pythonic interface, autocompletion in the IDE, and powerful features. Fetch real-time results anywhere, anytime; no more worrying about session outages on Google Colab. Set the initial state easily for more efficient parameter exploration.

🏭 Ready for integration & production: built on top of the DocArray data structure, it integrates smoothly with Jina, CLIP-as-service and other cross-/multi-modal applications.

☁️ As-a-service: simply run python -m discoart serve, and DiscoArt becomes a high-performance, low-latency service that supports gRPC/HTTP/WebSockets and TLS. Scaling up or down takes one line; cloud-native features such as Kubernetes, Prometheus and Grafana take one line as well. Unbelievably simple, thanks to Jina.

Gallery with prompts

Install

Python 3.7+ and a CUDA-enabled PyTorch are required.

pip install discoart

This applies to self-hosting, Google Colab, system integration, and non-GUI environments.

Get Started

Open in Google Colab

Create artworks

from discoart import create

da = create()

That's it! It creates artworks with the default text prompts and parameters.

Set prompts and parameters

Supported parameters are listed here. You can specify them in create():

from discoart import create

da = create(
    text_prompts='A painting of sea cliffs in a tumultuous storm, Trending on ArtStation.',
    init_image='https://d2vyhzeko0lke5.cloudfront.net/2f4f6dfa5a05e078469ebe57e77b72f0.png',
    skip_steps=100,
)

In case you forget a parameter, just look up the cheatsheet at any time:

from discoart import cheatsheet

cheatsheet()

The differences in parameters between DiscoArt and DD5.6 are explained here.

Visualize results

Final results and intermediate results are created under the current working directory, e.g.

./{name-docarray}/{i}-step-{j}.png
./{name-docarray}/{i}-progress.png
./{name-docarray}/{i}-done.png

where:

  • name-docarray is the name of the run; you can specify it, otherwise a random name is used.
  • i runs over the batches, up to the value of n_batches.
  • *-done-* is the final image, written when a batch finishes.
  • *-step-* is the intermediate image at a certain step.
  • *-progress-* is the sprite image of all intermediate results so far.
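
For instance, here is a minimal sketch that collects all final images of a run from disk, assuming the naming scheme above and a hypothetical run name:

import glob
import os

# hypothetical run name; use the name-docarray shown in your own session
name = 'discoart-3205998582'

# collect the final images following the naming scheme described above
finals = sorted(glob.glob(os.path.join(name, '*-done*.png')))
print(finals)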

Moreover, create() returns da, a DocumentArray-type object. It contains the following information:

  • All arguments passed to the create() function, including the seed, text prompts and model parameters.
  • The 4 generated images and their intermediate-step images, where 4 is the default value of n_batches.

This allows you to further post-process, analyze, and export the results with the powerful DocArray API.
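
For a quick look at that structure, here is a small sketch; the tag keys are assumed to mirror the create() argument names (e.g. text_prompts, seed):

print(len(da))                         # number of final images, i.e. n_batches
print(da[0].tags.get('text_prompts'))  # the text prompts used for this run
print(da[0].tags.get('seed'))          # the seed of this run
print(len(da[0].chunks))               # intermediate-step images of the first run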

Images are stored as Data URIs in .uri. To save the first image as a local file:

da[0].save_uri_to_file('discoart-result.png')

To save all final images:

for idx, d in enumerate(da):
    d.save_uri_to_file(f'discoart-result-{idx}.png')

You can also display all four final images in a grid:

da.plot_image_sprites(skip_empty=True, show_index=True, keep_aspect_ratio=True)

Or display them one by one:

for d in da:
    d.display()

Or take one particular run:

da[0].display()

Visualize intermediate steps

You can also zoom into a run (say the first run) and check out intermediate steps:

da[0].chunks.plot_image_sprites(
    skip_empty=True, show_index=True, keep_aspect_ratio=True
)

You can .display() the chunks one by one, or save one via .save_uri_to_file(), or save all intermediate steps as a GIF:

da[0].chunks.save_gif(
    'lighthouse.gif', show_index=True, inline_display=True, size_ratio=0.5
)

Export configs

You can review a run's parameters from da[0].tags or export the config as an SVG image:

from discoart.config import save_config_svg

save_config_svg(da)

Pull results anywhere anytime

If you are a free-tier Google Colab user, one annoying thing is losing sessions from time to time. Or sometimes you stop the run early because the first image is not good enough, and a keyboard interrupt prevents .create() from returning any result. In either case, you can easily recover the results by pulling the last session ID.

  1. Find the session ID. It appears on top of the image.

  2. Pull the result via that ID on any machine at any time, not necessarily on Google Colab:

    from docarray import DocumentArray
    
    da = DocumentArray.pull('discoart-3205998582')
    

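The pulled DocumentArray behaves just like the one returned by create(), so you can, for example, save the recovered final images right away with the same API shown above:

for idx, d in enumerate(da):
    d.save_uri_to_file(f'recovered-{idx}.png')
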
Reuse a Document as initial state

Consider a Document as self-contained data with config and image; one can use it as the initial state for a future run. Its .tags will be used as the initial parameters; its .uri, if present, will be used as the initial image.

from discoart import create
from docarray import DocumentArray

da = DocumentArray.pull('discoart-3205998582')

create(
    init_document=da[0],
    cut_ic_pow=0.5,
    tv_scale=600,
    cut_overview='[12]*1000',
    cut_innercut='[12]*1000',
    use_secondary_model=False,
)

Environment variables

You can set environment variables to control the meta-behavior of DiscoArt. The environment variables must be set before importing DiscoArt, either in Bash or in Python via os.environ.

DISCOART_LOG_LEVEL='DEBUG' # more verbose logs
DISCOART_OPTOUT_CLOUD_BACKUP='1' # opt-out from cloud backup
DISCOART_DISABLE_IPYTHON='1' # disable ipython dependency
DISCOART_DISABLE_RESULT_SUMMARY='1' # disable result summary after the run ends
DISCOART_DEFAULT_PARAMETERS_YAML='path/to/your-default.yml' # use a custom default parameters file
DISCOART_CUT_SCHEDULES_YAML='path/to/your-schedules.yml' # use a custom cut schedules file
DISCOART_MODELS_YAML='path/to/your-models.yml' # use a custom list of models file
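
For example, to set them in Python (a minimal sketch; the values are just illustrations), the assignments must happen before the import:

import os

# must be set before importing discoart
os.environ['DISCOART_LOG_LEVEL'] = 'DEBUG'
os.environ['DISCOART_OPTOUT_CLOUD_BACKUP'] = '1'

from discoart import create

da = create()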

CLI

DiscoArt provides two commands, create and config, that allow you to run DiscoArt from the CLI.

python -m discoart create my.yml

which creates artworks from the YAML config file my.yml. You can also do:

cat config.yml | python -m discoart create

So how do you get your own my.yml, and what does it look like? That's the second command:

python -m discoart config my.yml

which forks the default YAML config and exports it to my.yml. Now you can modify it and run it with the python -m discoart create command.

If no output path is specified, then python -m discoart config will print the default config to stdout.
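
As an illustration, after editing, a my.yml could look something like the following hypothetical excerpt; the keys mirror the create() parameters shown earlier (text_prompts, skip_steps, n_batches):

text_prompts:
  - A painting of sea cliffs in a tumultuous storm, Trending on ArtStation.
  - yellow color scheme
n_batches: 4
skip_steps: 100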

To get help on a command, add --help at the end, e.g.:

python -m discoart create --help
usage: python -m discoart create [-h] [YAML_CONFIG_FILE]

positional arguments:
  YAML_CONFIG_FILE  The YAML config file to use, default is stdin.

optional arguments:
  -h, --help        show this help message and exit

Serving

Serving DiscoArt is super easy. Simply run the following command:

python -m discoart serve

You should see the server start up.

Now send a request to the server via curl/JavaScript, e.g.:

curl \
-X POST http://0.0.0.0:51001/post \  # use private/public if your server is remote
-H 'Content-Type: application/json' \
-d '{"parameters": {"text_prompts": ["A beautiful painting of a singular lighthouse", "yellow color scheme"]}}'

That's it.

You can of course pass any parameter accepted by the create() function in the JSON.
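
If you prefer Python over curl, here is a minimal sketch with the requests library against the same HTTP endpoint shown above:

import requests

# same payload as the curl example above
resp = requests.post(
    'http://0.0.0.0:51001/post',
    json={
        'parameters': {
            'text_prompts': [
                'A beautiful painting of a singular lighthouse',
                'yellow color scheme',
            ]
        }
    },
)
print(resp.status_code)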

Scaling out

If you have multiple GPUs and you want to run multiple DiscoArt instances in parallel by leveraging GPUs in a time-multiplexed fashion, you can copy-paste the default flow.yml file and modify it as follows:

jtype: Flow
with:
  protocol: http
  monitoring: true
  port: 51001
  port_monitoring: 51002  # prometheus monitoring port
  env:
    JINA_LOG_LEVEL: debug
    DISCOART_DISABLE_IPYTHON: 1
    DISCOART_DISABLE_RESULT_SUMMARY: 1
executors:
  - name: discoart
    uses: DiscoArtExecutor
    env:
      CUDA_VISIBLE_DEVICES: RR0:3  # change this if you have multiple GPUs
    replicas: 3  # change this if you have larger VRAM

Here replicas: 3 spawns three DiscoArt instances, and CUDA_VISIBLE_DEVICES: RR0:3 makes sure they use the first three GPUs in a round-robin fashion.

Name it myflow.yml and then run:

python -m discoart serve myflow.yml

Customization

Thanks to Jina, there are tons of things you can customize! You can change the port number; change the protocol to gRPC/WebSockets; add TLS encryption; enable/disable Prometheus monitoring; you can also export it to a Kubernetes deployment bundle simply via:

jina export kubernetes myflow.yml

For more features and YAML configs, please check out Jina docs.

Hosting on Google Colab

Though not recommended, it is also possible to use Google Colab to host the DiscoArt server. Please check out the following tutorials:

Run in Docker


We provide a prebuilt Docker image for running DiscoArt out of the box. To update the Docker image to the latest version:

docker pull jinaai/discoart:latest

Use Jupyter notebook

The default entrypoint starts a Jupyter notebook:

# docker build . -t jinaai/discoart  # if you want to build yourself
docker run -p 51000:8888 -v $(pwd):/home/jovyan/ -v $HOME/.cache:/root/.cache --gpus all jinaai/discoart

Now you can visit http://127.0.0.1:51000 to access the notebook.

Use as a service

# docker build . -t jinaai/discoart  # if you want to build yourself
docker run --entrypoint "python -m discoart serve" -p 51001:51001 -v $(pwd):/home/jovyan/ -v $HOME/.cache:/root/.cache --gpus all jinaai/discoart

Your DiscoArt server is now running at http://127.0.0.1:51001.

Release cycle

Docker images are built on every release, so one can lock it to a specific version, say 0.5.1:

docker run -p 51000:8888 -v $(pwd):/home/jovyan/ -v $HOME/.cache:/root/.cache --gpus all jinaai/discoart:0.5.1

What's next?

Next is create.

😎 If you are already a DD user: you are ready to go! There is no extra learning; DiscoArt respects the same parameter semantics as DD5.6. So just unleash your creativity! Read more about their differences here.

You can always do from discoart import cheatsheet; cheatsheet() to check all new/modified parameters.

👶 If you are a DALL·E Flow or new user: you may want to take it step by step, as Disco Diffusion works in a very different way than DALL·E. It is much more advanced and powerful: e.g. Disco Diffusion can take weighted & structured text prompts; it can initialize from an image with controlled noise; and there are many more parameters one can tweak. An impatient prompt like "armchair avocado" will give you nothing but confusion and frustration. I highly recommend checking out the following resources before trying your own prompt:

Support

Join Us

DiscoArt is backed by Jina AI and licensed under the MIT License. We are actively hiring AI engineers and solution engineers to build the next neural search ecosystem in open source.
