Do you know Stable Diffusion can be used to extend an image in any direction? The technique is called outpainting. It can generate a coherent background that was originally out of frame. In this article, we will walk through how to do it step by step in the AUTOMATIC1111 GUI. I will also introduce other excellent outpainting models as alternatives.
Software needed
We will use AUTOMATIC1111, a popular and full-featured Stable Diffusion GUI, in this guide. We will use the 1-click launch Colab Notebook in the Quick Start Guide. See the instructions for how to use it. You can also install this GUI on Windows and Mac.
When launching the notebook, make sure to select the F222 model, which will be used in this tutorial.

Step-by-step guide
The first step is to get your image ready. In this tutorial, we will use the following image, generated by Stable Diffusion, as the starting point. We will use outpainting to center the subject and turn the image into landscape size.

You can download this image using the button below to follow the tutorial.
Upload image to AUTOMATIC1111
If the image was generated by the AUTOMATIC1111 GUI, the prompt and other generation parameters were written into the PNG file’s metadata.
In the AUTOMATIC1111 GUI, go to the PNG Info tab. Drag and drop the image from your local storage to the canvas area. The generation parameters should appear on the right.
Press Send to img2img to send this image and its parameters for outpainting. The image and prompt should appear in the img2img sub-tab of the img2img tab.
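As a side note, these parameters live in a PNG text chunk named parameters, so you can also read them outside the GUI. Here’s a minimal sketch with Pillow (the filename is hypothetical):

```python
from PIL import Image

img = Image.open("starting-image.png")   # hypothetical filename
params = img.info.get("parameters")      # A1111 stores prompt, seed, sampler, etc. in one text chunk
print(params or "No AUTOMATIC1111 generation parameters found in this PNG.")
```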

If your starting image was not created by the AUTOMATIC1111 GUI, simply proceed to the img2img tab. Upload the image to the img2img canvas. You will need to write a prompt that accurately describes the image and style. (You can have a prompt generated automatically by clicking “Interrogate CLIP”, but it is not very good in my opinion.)
Adjust parameters for outpainting

First, you will need to select an appropriate model for outpainting. For consistency in style, you should use the same model that generated the image. For example, I used the F222 model, so I will use the same model for outpainting.
If you used the base model v1.4 or v1.5, or you are outpainting a photograph, you can also use the v1 inpainting model. It is supposed to give better results, although I have had no problems without it.
The image size should have been set automatically if you used PNG Info. For a custom image, set the shorter side to the native resolution of the model, e.g. 512 px for v1 models. The longer side should be adjusted accordingly to maintain the aspect ratio.
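If you want to work out the size by hand, the rule is easy to script. Below is a small sketch; the multiple-of-8 rounding is my assumption to match the GUI’s size slider step:

```python
def outpaint_size(width, height, native=512, multiple=8):
    """Pin the short side to the model's native resolution and scale
    the long side to keep the aspect ratio."""
    scale = native / min(width, height)
    round_to = lambda x: round(x * scale / multiple) * multiple
    return round_to(width), round_to(height)

print(outpaint_size(1024, 1536))  # portrait input -> (512, 768)
```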
Set resize mode to crop and resize so that the aspect ratio won’t change.
Set seed to -1 to get a different result each time.
Denoising strength is the knob you will have a lot of fun playing with… for now, let’s set it to 0.6.
You can use your standard text-to-image settings for the rest. For completeness, here’s what I use:
- Sampling method: DPM++ 2M Karras
- Sampling Steps: 30
- Batch size: 4
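If you’d rather script these settings than click through the GUI, AUTOMATIC1111 exposes them through its web API when launched with the --api flag. Here is a minimal sketch of the equivalent img2img request; the server address and filename are assumptions, and the field names follow the /sdapi/v1/img2img endpoint:

```python
import base64
import requests

# Encode the starting image for the API (hypothetical filename).
with open("starting-image.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode()

payload = {
    "init_images": [init_image],
    "prompt": "your original prompt here",  # reuse the generation prompt
    "sampler_name": "DPM++ 2M Karras",
    "steps": 30,
    "batch_size": 4,
    "denoising_strength": 0.6,
    "resize_mode": 1,                       # 1 = "Crop and resize" in the GUI
    "seed": -1,                             # random seed each run
    "width": 512,
    "height": 768,
}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
images = resp.json()["images"]              # base64-encoded results
```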
This is what my settings section looks like:

Enable outpainting script
Scroll down and you should see a Script dropdown box. There are two options for outpainting: (1) Outpainting mk2 and (2) Poor man’s outpainting. Outpainting mk2 doesn’t work very well, so choose Poor man’s outpainting.

You can leave Pixels to expand at 128 pixels. Pick fill for masked content. It will use the average color of the image to fill in the expanded area before outpainting.
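To see what fill does conceptually, here is a small sketch that pads an image on the right with its average color, which is roughly the canvas the outpainting pass then repaints (NumPy and Pillow assumed; filenames are hypothetical):

```python
import numpy as np
from PIL import Image

img = np.array(Image.open("starting-image.png").convert("RGB"))
avg = img.reshape(-1, 3).mean(axis=0).astype(np.uint8)  # average color

pixels = 128                                   # "Pixels to expand"
pad = np.tile(avg, (img.shape[0], pixels, 1))  # strip of the average color
preview = np.concatenate([img, pad], axis=1)   # extend to the right
Image.fromarray(preview).save("fill-preview.png")
```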
It is best to outpaint one direction at a time. I pick right as the outpaint direction for this image.
I am reusing the original prompt.
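If you are following along with the API sketch from earlier, the script is enabled with two more payload fields. script_name and script_args are standard API fields, but the positional order of script_args below is my assumption based on the script’s UI controls, so verify it against your AUTOMATIC1111 version:

```python
# Enable the outpainting script in the payload from the earlier sketch.
# The script_args order (pixels, mask blur, masked content, directions)
# is an assumption mirroring the script's UI -- verify for your version.
payload["script_name"] = "Poor man's outpainting"
payload["script_args"] = [
    128,        # pixels to expand
    4,          # mask blur (the script's default)
    0,          # masked content: 0 = fill (may need the string "fill" instead -- verify)
    ["right"],  # outpaint one direction at a time
]
```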
Press Generate and you are in business! Regenerate as many times as needed until you see an image you like.
Increase denoising strength to change more. Decrease denoising strength to change less. It is that simple.
Center an image
This is what I got. She is no longer on the right of the photo but dead center. The expanded pixels are visually consistent with the rest of the image. I am pleased!

Once you are happy with one side, you can hit Send to img2img under the result canvas to iterate the process.
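If you are scripting this through the API, iterating means feeding each result back in as the next init image. A rough sketch reusing the payload built in the earlier snippets (keeping the first image of each batch is a simplification; in the GUI you would curate by hand):

```python
import requests

URL = "http://127.0.0.1:7860/sdapi/v1/img2img"
image_b64 = payload["init_images"][0]          # from the earlier sketch

for direction in ["right", "left", "right", "left"]:
    payload["init_images"] = [image_b64]
    payload["script_args"][3] = [direction]    # assumed position of the directions argument
    result = requests.post(URL, json=payload).json()
    image_b64 = result["images"][0]            # keep the first candidate of the batch
```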
Convert to landscape size
Let’s extend the left and right sides multiple times so that the portrait-sized image becomes landscape. This can change the perception of the image completely. Now it is not a close-up of the subject. The expansive dystopian city background creates a great contrast and tells a good story.

Fix details with inpainting
You don’t need to be too hung up on the little details of the extended part, because you can always regenerate any area later with inpainting. Below I will show you how to regenerate the whole right-hand side.
First, press Send to inpaint to send your newly generated image to the inpainting tab. Make sure the Inpaint tab is selected. Use the paintbrush tool to create a mask over the area you want to regenerate.

The settings I used are:
- Mask mode: Inpaint masked
- Inpaint area: Only masked
- Only masked padding, pixels: 36-72 (adjust as needed)
- Script: None (Don’t forget to turn off the outpainting script!)
- Denoising strength: 0.6-0.9. You will want to look at the result and adjust this. Increase to change more.
- Batch size: 2-4. Generate multiple images at a time for comparison.
- Seed: -1 (random)
- Masked content: original or fill. (Fill will use the average color under the mask as the initial value)
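These inpainting settings also map onto the web API: it is the same img2img endpoint with a mask image added, and no script. A sketch under the same assumptions as before; the filenames are hypothetical and the mask should be white where you want to regenerate:

```python
import base64
import requests

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("outpainted.png")],  # hypothetical filenames
    "mask": b64("mask.png"),                 # white = area to regenerate
    "prompt": "your original prompt here",
    "inpainting_mask_invert": 0,             # 0 = "Inpaint masked"
    "inpaint_full_res": True,                # "Only masked" inpaint area
    "inpaint_full_res_padding": 64,          # within the 36-72 px range above
    "inpainting_fill": 1,                    # masked content: 1 = original, 0 = fill
    "denoising_strength": 0.75,
    "batch_size": 4,
    "seed": -1,
    # no script_name field, so no script runs (Script: None)
}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
```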
Below is a screenshot of my inpainting settings. See Inpainting Guide for more detailed explanations.

After a few rounds of inpainting, this is my final image in landscape size.

Outpainting complex scenes
The above Stable Diffusion method works well with simple backgrounds. It will struggle with complex scenes. The reason is that the outpainting method only considers a small area of the image adjacent to the outpainted area. To extend a complex scene, long-range information needs to be accounted for.
It is not Stable Diffusion (sorry, Stable Diffusion purists!), but there’s an outstanding inpainting/outpainting method available called MAT (Mask-Aware Transformer). It is a GAN model designed to account for long-range information when making up parts of an image.
Failure example of Stable Diffusion outpainting
To illustrate the importance of modeling long-range information, let’s outpaint the following complex street scene.

Using the above method to expand the right-hand side, we get this image:

The extended part looks OK on its own but is not coherent with the rest of the image as a whole.
MAT outpainting
MAT outpainting is not only faster, but it also gives better results. See the image below, extended by MAT. It’s not perfect, but much better.

Go to Stable Diffusion Mat Outpainting to use MAT. The GUI only allows you to generate a square image, but you can generate a larger square image and crop it to landscape size. Scale is the scaling applied to the uploaded image before outpainting. I set scale to 1 and the output size to 768 in the above image, so that it outpaints a 512×768 image to 768×768, extending the left and right sides.
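Cropping the square result back to landscape is a one-liner in Pillow. A small sketch, assuming a 768×768 MAT output trimmed back to 768×512 (filenames are hypothetical):

```python
from PIL import Image

square = Image.open("mat-output.png")    # hypothetical 768x768 result
w, h = square.size
target_h = 512
top = (h - target_h) // 2                # trim top and bottom equally
square.crop((0, top, w, top + target_h)).save("mat-landscape.png")
```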
MAT also worked quite well with our example. Below is the image outpainted laterally.

Other outpainting options
If you don’t want to use AUTOMATIC1111 for whatever reason, this section is for you.
Stable Diffusion Infinity
Stable Diffusion Infinity is a nice graphical outpainting user interface. The Hugging Face demo is free to use. It can run pretty fast if no one else is using it. You can also launch a Colab notebook to run your own instance.
The interface lets you outpaint one tile at a time. You will need to write a prompt for each tile. Each tile should have a small overlap with the existing image.
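The overlap is what keeps each new tile consistent with what is already on the canvas. As a toy illustration of the geometry (all numbers are made up):

```python
tile, overlap = 512, 64        # tile size and overlap, both hypothetical
x_edge = 768                   # current right edge of the painted canvas
next_x = x_edge - overlap      # next tile starts inside the existing image
print(f"outpaint tile covers x = {next_x} to {next_x + tile}")
```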

There’s a demo video on the GitHub page showing you how to use it.