ControlNet, a new extension for Stable Diffusion, lets a reference image control the output of text-to-image generation. This means that instead of coloring an image first and then running image-to-image, it's possible to go directly from a sketch to a fully painted image while maintaining the character's position, pose, and proportions.
These examples show raw outputs cherry-picked from ~100 generated images.
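For anyone who wants to try this without the webui extension, here's a minimal sketch using the Hugging Face diffusers library instead (an assumption on my part, not how these images were made). The scribble-conditioned checkpoint `lllyasviel/sd-controlnet-scribble` and the local file `sketch.png` are illustrative choices:

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Load a ControlNet trained on scribble/sketch conditioning (assumed checkpoint).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)

# Attach it to a standard Stable Diffusion 1.5 pipeline.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The sketch acts as the conditioning image; the prompt describes the final painting.
sketch = load_image("sketch.png")  # hypothetical local file
result = pipe(
    "a fully painted character portrait, detailed, vibrant colors",
    image=sketch,
    num_inference_steps=20,
).images[0]
result.save("output.png")
```

The conditioning image constrains composition and pose while the text prompt drives style and color, which is what makes the sketch-to-painting workflow possible in a single pass.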