This is done by combining a diffusion model with a ControlNet. As long as you have a decently modern Nvidia GPU and some familiarity with Python and PyTorch, it’s relatively simple to create your own model.
The ControlNet paper is here: https://arxiv.org/pdf/2302.05543.pdf
I implemented this paper back in March. It’s as simple as it is brilliant. By using methods originally intended to adapt large pre-trained language models to a specific application, the authors created a new model architecture that can better control the output of a diffusion model.
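If you just want to run a pretrained one rather than implement the paper yourself, the diffusers library wraps it in a few lines. A minimal sketch, assuming the commonly published Canny checkpoint and an SD 1.5 base (swap in whatever checkpoints and filenames you like):

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Load a pretrained ControlNet (the Canny-edge variant here) and attach
# it to a Stable Diffusion 1.5 base model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The control image guides the layout; the prompt guides the content.
control_image = load_image("guide.png")  # hypothetical filename
image = pipe("a castle on a hill, detailed painting", image=control_image).images[0]
image.save("out.png")
```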
It’s probably Stable Diffusion. I use ComfyUI since you can watch the sausage get made, but there are other UIs like AUTOMATIC1111. There’s a ControlNet, originally built as a QR-pattern beautifier, that takes a two-tone black-and-white “guide” image, but you can get it to follow any image you feed it, such as a meme edited to be black and white, or text like “GAY SEX.”
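Prepping that kind of guide image is just a hard threshold. A quick sketch with PIL, with hypothetical filenames and an arbitrary cutoff of 128 you’d tune by eye:

```python
from PIL import Image

img = Image.open("meme.png").convert("L")           # grayscale
guide = img.point(lambda p: 255 if p > 128 else 0)  # hard threshold to two tones
guide = guide.convert("RGB")                        # feed the pipeline an RGB image
guide.save("guide.png")
```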
Would like to know that as well. I just stole the meme from a non-fediverse meme site.
Looks like somebody created an outline mask and then used that in img2img with a prompt for the particular scenery.
I remember seeing somebody use that technique to generate the same pose and model but with different colored outfits.
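For anyone wanting to reproduce that outside a UI, here’s a rough img2img sketch with diffusers; the strength value is a guess to tune (lower keeps more of the input layout):

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Feed the outline/mask as the init image; the prompt fills in the scenery.
init_image = load_image("outline_mask.png").resize((512, 512))  # hypothetical file
result = pipe(
    prompt="lush alpine valley, golden hour, highly detailed",
    image=init_image,
    strength=0.75,  # how far to stray from the init image
).images[0]
result.save("scenery.png")
```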
Maybe a Canny map like the one used here:
https://youtu.be/8cVnooYgpDc?t=13m
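If it is a Canny map, generating one is a couple of lines with OpenCV; the 100/200 thresholds are just the usual starting point:

```python
import cv2
import numpy as np

img = cv2.imread("input.png")                  # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # Canny wants single-channel 8-bit
edges = cv2.Canny(gray, 100, 200)              # edge map
edges = np.stack([edges] * 3, axis=-1)         # back to 3 channels for the pipeline
cv2.imwrite("canny_guide.png", edges)
```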
There’s also SD.Next with some enhancements to automatic1111.
I’ve used InvokeAI (Stable Diffusion models) with ControlNets for this too; it’s a bit easier to use than ComfyUI/A1111 but not as powerful, IMHO.
Check this out:
https://github.com/camenduru/controlnet-colab
I started looking after you responded. Haven’t gotten it to work yet.