It’s probably Stable Diffusion. I use ComfyUI since you can watch the sausage get made, but there are also other UIs like Automatic1111. There’s a ControlNet, originally made as a QR-pattern beautifier, that takes a two-tone black-and-white “guide” image, but you can guide it to follow any image you feed it, such as a meme edited to be black and white, or text like “GAY SEX.”
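The guide-image prep itself is just a threshold to two tones. A minimal sketch with NumPy, where a grayscale gradient stands in for whatever meme or rendered text you'd actually start from:

```python
import numpy as np

# Stand-in source image: a horizontal grayscale gradient
# (in practice this would be your meme or rendered text).
img = np.tile(np.arange(256, dtype=np.uint8), (64, 1))

# Two-tone threshold: pixels brighter than mid-gray become white,
# everything else black -- the kind of black/white guide image
# the QR-style ControlNet expects.
guide = np.where(img > 127, 255, 0).astype(np.uint8)

print(np.unique(guide).tolist())  # -> [0, 255]
```

You'd then save `guide` as a PNG and feed it to the ControlNet node in ComfyUI (or the ControlNet tab in Automatic1111) alongside your prompt.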
There’s also SD.Next, which adds some enhancements on top of Automatic1111.
Looks like somebody created an outline mask and then used that in img2img with a prompt for the particular scenery.
I remember seeing somebody use that technique to generate the same pose and model but with different colored outfits.
Maybe a Canny map like the one used here:
https://youtu.be/8cVnooYgpDc?t=13m
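For anyone unfamiliar, a Canny map is just an edge-detection pass on the input image, which the ControlNet then follows. A rough sketch of the idea using plain NumPy gradient magnitude as a stand-in (the real preprocessor would typically be `cv2.Canny` or the Canny node in ComfyUI):

```python
import numpy as np

# Toy input: black canvas with a white square, standing in for a photo.
img = np.zeros((64, 64), dtype=float)
img[16:48, 16:48] = 255.0

# Crude edge map via gradient magnitude -- a simplified stand-in for
# the real Canny preprocessor (which also does smoothing, non-max
# suppression, and hysteresis thresholding).
gy, gx = np.gradient(img)
edges = np.where(np.hypot(gx, gy) > 0, 255, 0).astype(np.uint8)

# Edges trace the square's outline, not its interior.
print(edges[32, 16], edges[32, 32])  # -> 255 0
```

The resulting white-on-black outline is what gets fed to the Canny ControlNet, which is why the generated image keeps the original's shapes and composition.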
I’ve used InvokeAI (Stable Diffusion models) with ControlNets for it too; it’s a bit easier to use than Comfy/A1111 but not as powerful, IMHO.