• If you missed the recent discussion about InstructPix2Pix, it's a model that's been trained to make edits to an image using natural-language prompts. Take a look at this page for more information and examples:

Edit: Hijacking my most upvoted comment to summarise some of the other information in this thread.

To use this you need to update to the latest version of A1111 and download the instruct-pix2pix-00-22000.safetensors file from this page:

Put the file in the models\Stable-diffusion folder alongside your other Stable Diffusion checkpoints. Restart the WebUI, select the new model from the checkpoint dropdown at the top of the page, and switch to the Img2Img tab. There should now be an "Image CFG Scale" setting alongside the "CFG Scale". The "Image CFG Scale" determines how much the result resembles your starting image, so a lower value means a stronger edit - the opposite of the CFG Scale.
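The install step above can be sketched as a couple of shell commands. `WEBUI_DIR` here is an assumed install location - point it at your own A1111 checkout, and note the forward slashes if you're on Linux/macOS (the backslash path above is the Windows form):

```shell
#!/bin/sh
# Assumed A1111 install path -- adjust to your own checkout.
WEBUI_DIR="${WEBUI_DIR:-$HOME/stable-diffusion-webui}"
CKPT=instruct-pix2pix-00-22000.safetensors

# Make sure the checkpoint folder exists, then move the downloaded
# file in alongside your other Stable Diffusion checkpoints.
mkdir -p "$WEBUI_DIR/models/Stable-diffusion"
if [ -f "$CKPT" ]; then
    mv "$CKPT" "$WEBUI_DIR/models/Stable-diffusion/"
fi
```

After restarting the WebUI, the new checkpoint should appear in the dropdown at the top of the page.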