Reverse-Engineering Stable Diffusion Prompts

A step-by-step guide to finding which prompt created an AI-generated image

Have you ever come across an AI-generated image and wondered, “What prompt did they use to make that?” You might think that unless the artist shares their process, there’s no way to find out. But what if you could work backwards, starting from an image, to discover the prompt that generated it?

My tool ImageToPrompt lets you upload an image and generates a detailed prompt that captures its key elements, from the main subject matter to the artistic style, color palette, and composition. Because many different prompts can produce similar images, and generation itself depends on a random seed, there’s no way to recover the exact prompt the image came from, but we can get pretty close.

Here’s a step-by-step guide to reverse-engineering Stable Diffusion prompts with ImageToPrompt:

  1. Choose an image: Select an AI-generated image that you find striking and want to analyze.
  2. Upload to ImageToPrompt: Upload the image to ImageToPrompt.com. ImageToPrompt will generate prompts for both Stable Diffusion (SD) and SDXL.
  3. Experiment and refine: Use the generated prompt as a starting point for your own experiments. Modify the prompt based on what you observe, generate new images with Stable Diffusion (see the sketch after this list), and iterate until you get the result you’re looking for.
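
If you want to run step 3 programmatically rather than through a UI, here’s a minimal sketch using the Hugging Face diffusers library. The checkpoint ID, prompts, seed, and filenames below are just placeholders; swap in whichever SD checkpoint and generated prompt you’re working with, and note that a CUDA GPU is assumed.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load any Stable Diffusion 1.x checkpoint (placeholder model ID shown).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Start from the prompt ImageToPrompt generated, then tweak it each iteration.
base_prompt = "a misty mountain lake at sunrise, oil painting, warm color palette"
variants = [base_prompt, base_prompt + ", dramatic lighting, high detail"]

for i, prompt in enumerate(variants):
    # Re-seed each run so the starting noise is identical across variants;
    # that way, differences in the output come from your prompt edits,
    # not from random chance.
    generator = torch.Generator(device="cuda").manual_seed(42)
    image = pipe(
        prompt,
        num_inference_steps=30,
        guidance_scale=7.5,
        generator=generator,
    ).images[0]
    image.save(f"attempt_{i}.png")
```

Pinning the seed is the key trick for refinement: with the noise held constant, each prompt change maps to a visible change in the image, which makes it much easier to tell which words are actually doing the work.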

It’s that simple. ImageToPrompt is a work in progress, so if you have any feedback, don’t hesitate to email me at charlie@imagetoprompt.com. Thanks for trying it out!

Try uploading an image at ImageToPrompt.com →