Models, sometimes called checkpoint files, are pre-trained Stable Diffusion weights intended for generating images, either general-purpose or of a particular genre.
What images a model can generate depends on the data used to train it. A model won’t be able to generate a cat’s image if there was never a cat in the training data. Likewise, if you train a model only with cat images, it will only generate cats.
What is fine-tuning?
Fine-tuning is a common technique in machine learning. It takes a model that has been trained on a wide dataset and trains it a bit more on a narrow dataset.
A fine-tuned model will be biased toward generating images similar to your dataset while maintaining the versatility of the original model.
Why do people make them?
Stable Diffusion is great but is not good at everything. For example, it can and will generate anime-style images with the keyword “anime” in the prompt. But it can be difficult to generate images of a sub-genre of anime. Instead of tinkering with the prompt, you can fine-tune the model with images of that sub-genre.
How are they made?
A fine-tuned model is made by training a base model with an additional dataset you are interested in. For example, you can train Stable Diffusion v1.5 with an additional dataset of vintage cars to bias the aesthetic of cars toward the sub-genre.
Dreambooth, initially developed by Google, is a technique to inject custom subjects into text-to-image models. It works with as few as 3-5 custom images. You can take a few pictures of yourself and use Dreambooth to put yourself into the model. A model trained with Dreambooth requires a special keyword to condition the model.
There’s another less popular fine-tuning technique called textual inversion (sometimes called embedding). The goal is similar to Dreambooth: Inject a custom subject into the model with only a few examples. A new keyword is created specifically for the new object. Only the text embedding network is fine-tuned while keeping the rest of the model unchanged. In layman’s terms, it’s like using existing words to describe a new concept.
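If you use models outside a GUI, loading a textual inversion embedding can be a one-liner. Below is a minimal sketch using the diffusers library; the model ID is real, but the embedding file name and the token are hypothetical placeholders.

```python
# Minimal sketch: using a textual inversion embedding with diffusers.
# The embedding file and the <my-concept> token are placeholders.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Adds one new "word" to the text encoder; the rest of the model is untouched.
pipe.load_textual_inversion("my_concept.pt", token="<my-concept>")

image = pipe("a photo of <my-concept> in a forest").images[0]
image.save("concept.png")
```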
There are two groups of models: v1 and v2. I will cover the v1 models in this section and the v2 models in the next section.
There are thousands of fine-tuned Stable Diffusion models. The number is increasing every day. Below is a list of models that can be used for general purposes.
Stable Diffusion v1.4
Released in August 2022 by Stability AI, the v1.4 model is considered to be the first publicly available Stable Diffusion model.
You can treat v1.4 as a general-purpose model. Most of the time, it is enough to use it as is unless you are really picky about certain styles.
Stable Diffusion v1.5
v1.5 was released in October 2022 by Runway ML, a partner of Stability AI. The model is based on v1.2 with further training.
The model page does not mention what the improvement is. It produces slightly different results compared to v1.4, but it is unclear if they are better.
Like v1.4, you can treat v1.5 as a general-purpose model.
In my experience, v1.5 is a fine choice as the initial model and can be used interchangeably with v1.4.
F222
F222 was originally trained for generating nudes, but people found it helpful for generating beautiful female portraits with correct body-part relations. Interestingly, contrary to what you might think, it’s quite good at generating aesthetically pleasing clothing.
F222 is good for portraits. However, it has a high tendency to generate nudes, so include wardrobe terms like “dress” and “jeans” in the prompt to suppress them.
Find more realistic photo-style models in this post.
Anything V3
Anything V3 is a special-purpose model trained to produce high-quality anime-style images. You can use danbooru tags (like 1girl, white hair) in the text prompt.
It’s useful for casting celebrities into anime style, which can then be blended seamlessly with illustrative elements.
One drawback (at least to me) is that it produces females with disproportionate body shapes. I like to tone it down with F222.
Open Journey
Open Journey is a model fine-tuned with images generated by Midjourney v4. It has a different aesthetic and is a good general-purpose model.
Triggering keyword: mdjrny-v4 style
Here’s a comparison of these models with the same prompt and seed. All but Anything V3 generate realistic images, but with different aesthetics.
There are thousands of Stable Diffusion models available. Many of them are special-purpose models designed to generate a particular style. Where should you start?
Here are some of the best models I keep going back to:
Dreamshaper
The Dreamshaper model is fine-tuned for a portrait illustration style that sits between photorealistic and computer graphics. It’s easy to use, and you will like it if you like this style.
Deliberate v2
Deliberate v2 is another must-have model (so many!) that renders realistic illustrations. The results can be surprisingly good. Whenever you have a good prompt, switch to this model and see what you get!
Realistic Vision v2
Realistic Vision v2 is good for generating anything realistic, whether people, objects, or scenes.
ChilloutMix
ChilloutMix is a special model for generating photo-quality Asian females. It is like the Asian counterpart of F222. Use it with the Korean embedding ulzzang-6500-v1 to generate girls that look like K-pop idols.
Like F222, it sometimes generates nudes. Suppress them with wardrobe terms like “dress” and “jeans” in the prompt, and “nude” in the negative prompt.
Protogen v2.2 (Anime)
Protogen v2.2 is classy. It generates illustration and anime-style images with good taste.
GhostMix
GhostMix is trained with the Ghost in the Shell style, a classic anime from the 90s. You will find it useful for generating cyborgs and robots.
Waifu Diffusion
Waifu Diffusion is a Japanese anime-style model.
Inkpunk Diffusion
Inkpunk Diffusion is a Dreambooth-trained model with a very distinct illustration style.
Use keyword: nvinkpunk
Finding more models
You can find more models on Hugging Face.
Civitai is another great resource to search for models.
Stable Diffusion v2
Stability AI released the v2 series as a major update to the v1 models. Notable differences:
- In addition to 512×512 pixels, a higher-resolution version trained at 768×768 pixels is available.
- You can no longer generate explicit content because pornographic material was removed from the training data.
You may assume that everyone has moved on to the v2 models. However, the Stable Diffusion community found that images looked worse in the 2.0 model. People also have difficulty using power keywords like celebrity names and artist names.
The 2.1 model has partially addressed these issues. The images look better out of the box. It’s easier to generate artistic style.
As of now, most people have not completely moved on to the 2.1 model. Many use it occasionally but spend most of their time with v1 models.
If you decide to try out the v2 models, be sure to check out these tips to avoid some common frustrations.
SDXL model
The SDXL model is an upgrade to the celebrated v1.5 model and the forgotten v2 models. Early testing results are very promising. The latest publicly available version is SDXL 0.9. Unlike the Beta version, you can download and run SDXL 0.9 locally.
The benefits of using the SDXL model are:
- Higher native resolution – 1024 px compared to 512 px for v1.5
- Higher image quality (compared to the v1.5 base model)
- Capable of generating legible text
- Easy to generate darker images
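If you want to try SDXL 0.9 outside the GUI, here is a minimal sketch using the diffusers library. It assumes you have access to the gated SDXL 0.9 base repository; the prompt is just an illustration.

```python
# Minimal sketch: running SDXL 0.9 with diffusers at its native resolution.
# Assumes access to the gated stabilityai/stable-diffusion-xl-base-0.9 repo.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9",
    torch_dtype=torch.float16,  # FP16 to fit consumer GPUs
).to("cuda")

image = pipe(
    "a neon sign that says OPEN",  # SDXL can often render legible text
    width=1024, height=1024,       # native 1024 px, vs 512 px for v1.5
).images[0]
image.save("sdxl.png")
```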
How to install and use a model
To install a model in AUTOMATIC1111 GUI, download the checkpoint file (.ckpt or .safetensors) and place it in the folder stable-diffusion-webui/models/Stable-diffusion.
Press the reload button next to the checkpoint dropdown box.
You should see the checkpoint file you just put in available for selection. Select the new checkpoint file to use the model.
Alternatively, you can press the “iPod” button under Generate.
The model panel will appear. Select the Checkpoints tab and choose a model.
If you are new to AUTOMATIC1111 GUI, some models are preloaded in the Colab notebook included in the Quick Start Guide.
See the SDXL article for using the SDXL model.
Merging two models
To merge two models using AUTOMATIC1111 GUI, go to the Checkpoint Merger tab and select the two models you want to merge in Primary model (A) and Secondary model (B).
Adjust the multiplier (M) to set the relative weight of the two models. Setting it to 0.5 merges the two models with equal importance.
After pressing Run, the new merged model will be available for use.
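Under the hood, the default weighted-sum method is a simple interpolation of the two models’ weights: merged = (1 - M) * A + M * B. Here is a minimal sketch of the idea, assuming two safetensors checkpoints with matching keys; the file names are placeholders, and real merge scripts handle mismatched keys and precision more carefully.

```python
# Minimal sketch of a weighted-sum merge: merged = (1 - M) * A + M * B.
from safetensors.torch import load_file, save_file

M = 0.5  # multiplier: 0.0 keeps model A as-is, 1.0 keeps model B

a = load_file("modelA.safetensors")
b = load_file("modelB.safetensors")

# Interpolate every weight tensor that exists in both models.
merged = {k: (1 - M) * a[k] + M * b[k] for k in a if k in b}
save_file(merged, "merged.safetensors")
```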
Example of a merged model
The merged model sits between the realistic F222 and the anime Anything V3 styles. It is a very good model for generating illustration art with human figures.
On a model download page, you may see several variants of the model.
This is confusing! Which one should you download?
Pruned, full, EMA-only models
Some Stable Diffusion checkpoint models consist of two sets of weights: (1) the weights after the last training step, and (2) the average of the weights over the last few training steps, called EMA (exponential moving average).
If you are only interested in generating images, the EMA-only model is all you need. These are the weights actually used when you run the model. They are sometimes called pruned models.
You will only need the full model (i.e., a checkpoint file containing both sets of weights) if you want to fine-tune the model with additional training.
So, download the pruned or EMA-only model if you simply want to use it to generate images. This saves you some disk space. Trust me, your hard drive will fill up very soon!
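As an illustration, here is a schematic sketch of one way to prune a v1-style checkpoint. This version simply drops the EMA copy and keeps the last-step weights; dedicated pruning scripts can instead swap the EMA weights in, and usually convert to FP16 as well. The file names are placeholders.

```python
# Schematic sketch: strip the EMA copy from a full v1 checkpoint to save disk.
# This keeps the last-step weights; real pruning scripts may keep EMA instead.
import torch

ckpt = torch.load("full-model.ckpt", map_location="cpu")
state_dict = ckpt["state_dict"]

# EMA weights live under the "model_ema." prefix in v1-style checkpoints.
pruned = {k: v for k, v in state_dict.items() if not k.startswith("model_ema.")}
torch.save({"state_dict": pruned}, "pruned-model.ckpt")
```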
FP16 and FP32 models
FP stands for floating point. It is a computer’s way of storing decimal numbers; here the decimal numbers are the model weights. FP16 takes 16 bits per number and is called half precision. FP32 takes 32 bits and is called full precision.
For deep learning models (such as Stable Diffusion), the training data is pretty noisy. You rarely need full precision when you use the model. The extra precision just stores noise!
So, download the FP16 models if available. They are about half as big. This saves you a few GB!
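For example, with the diffusers library you can request half-precision weights directly when a repository provides them. A minimal sketch, with the model ID used for illustration:

```python
# Minimal sketch: loading a model in half precision with diffusers.
# The fp16 files are roughly half the download size of fp32.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # keep the weights in FP16 in memory
    variant="fp16",             # download the fp16 files if the repo has them
).to("cuda")
```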
Safetensors
The original PyTorch model format is .pt. The downside of this format is that it is not secure: someone can pack malicious code in it, and the code can run on your machine when you load the model.
Safetensors is an improved version of the PT format. It does the same job of storing weights, but it will not execute any code.
So, download the safetensors version whenever it is available. If not, make sure you download PT files from a trustworthy source.
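The difference comes down to how the files are loaded. A minimal sketch, with placeholder file names:

```python
# Why safetensors is safer: .pt/.ckpt files are Python pickles, and
# unpickling can execute arbitrary code embedded in the file.
import torch
from safetensors.torch import load_file

unsafe = torch.load("model.ckpt", map_location="cpu")  # may run embedded code

# A .safetensors file is raw tensor data plus a header; loading it
# cannot execute anything.
safe = load_file("model.safetensors")
```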
Other model types
Four main types of files can be called “models”. Let’s clarify them so you know what people are talking about.
- Checkpoint models: These are the real Stable Diffusion models. They contain all you need to generate an image. No additional files are required. They are large, typically 2 – 7 GB. They are the subject of this article.
- Textual inversions: Also called embeddings. They are small files defining new keywords to generate new objects or styles. They are small, typically 10 – 100 KB. You must use them with a checkpoint model.
- LoRA models: They are small patch files to checkpoint models for modifying styles. They are typically 10 – 200 MB. You must use them with a checkpoint model (see the sketch after this list).
- Hypernetworks: They are additional network modules added to checkpoint models. They are typically 5 – 300 MB. You must use them with a checkpoint model.
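To make the relationship concrete, here is a hedged sketch of attaching an embedding and a LoRA to a checkpoint model with the diffusers library; the file names and the token are placeholders.

```python
# Minimal sketch: the small add-on files only work on top of a checkpoint.
from diffusers import StableDiffusionPipeline

# 1. The checkpoint model: self-contained, a few GB.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# 2. A textual inversion embedding: tiny, defines one new keyword.
pipe.load_textual_inversion("embedding.pt", token="<new-style>")

# 3. A LoRA: a small patch applied to the checkpoint's weights.
pipe.load_lora_weights(".", weight_name="style_lora.safetensors")
```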
In this article, I have introduced what Stable Diffusion models are, how they are made, a few common ones, and how to merge them. Using models can make your life easier when you have a specific style in mind.