Here is how to use LoRA models with Stable Diffusion WebUI – a full quick tutorial in 2 short steps! Discover the world of LoRA-trained model styles, learn how to use them in minutes, and benefit from their small file sizes and the control they give you over the image generation process. Along the way you’ll also learn what exactly LoRA models are and how they differ from traditional Stable Diffusion checkpoint fine-tunes. Let’s begin!
- What Are LoRA Models? – Simply Explained
- LoRA Models vs. Base Models/Checkpoints
- How To Use LoRA Models in Automatic1111 WebUI – Step By Step
- Stable Diffusion LoRA Model Prompt Examples
- Best Sources for Free LoRA Models
- How To Train LoRA Models Yourself?
What Are LoRA Models? – Simply Explained

As of now, we have quite a few different ways of training and fine-tuning Stable Diffusion models. These include Dreambooth training (available as an extension to the base SD WebUI functionality), textual inversion techniques, hypernetworks, merging checkpoints with different characteristics together and, finally, LoRA (Low-Rank Adaptation), which produces reasonably small files containing pre-trained styling information ready to be used alongside any existing Stable Diffusion checkpoint.
Low-Rank Adaptation is essentially a method of fine-tuning the cross-attention layers of Stable Diffusion models, allowing you to apply consistent image styles to your Stable Diffusion based generations. You can learn much more about the technical process involved here.
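To make the “low-rank” part a bit more concrete, here is a minimal PyTorch-style sketch of the core trick: the original weight matrix stays frozen, and only two small matrices (whose product forms the weight update) are trained. This is purely an illustration of the general idea – the class name, rank and scaling below are example values, not the exact implementation used by the WebUI or by LoRA training scripts.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Illustrative sketch: wrap a frozen linear layer with a low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the original weights stay frozen

        # Only these two small matrices are trained; their product (B @ A)
        # has the same shape as the original weight matrix.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank  # similar in spirit to the prompt multiplier

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight is W + scale * (B @ A), computed without ever
        # materializing a full-size weight update.
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)
```

Because lora_B starts at zero, the wrapped layer initially behaves exactly like the original one, and the new style only emerges as the two small matrices are trained.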
LoRA models, which you can find in the very same places that offer various Stable Diffusion checkpoints for free download, have one big advantage over regular SD checkpoints: they are relatively small in size.
Typically, LoRA-trained models available on civit.ai range in size from around 50 MB up to around 1 GB, although this of course always depends on the amount of data inside the model itself.
One prominent characteristic of LoRA-trained models, however, is that they need to be used alongside a base Stable Diffusion model to generate images. You can’t rely on a LoRA model by itself to generate an image.
Training LoRA models is also more memory efficient, requiring far less memory than, for example, fine-tuning Stable Diffusion models using Dreambooth.
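As a back-of-the-envelope illustration of why the resulting files stay so small, here is a quick comparison of trainable parameter counts for a single square projection layer. The hidden size of 768 and rank of 8 below are just example values, not tied to any particular model.

```python
# Full fine-tuning updates the whole d x d weight matrix, while a rank-r
# LoRA only trains an (r x d) matrix and a (d x r) matrix.
d = 768  # example hidden size
r = 8    # example LoRA rank

full_params = d * d       # 589,824 trainable weights
lora_params = 2 * d * r   # 12,288 trainable weights

print(f"Full fine-tune: {full_params:,} weights per layer")
print(f"Rank-{r} LoRA:  {lora_params:,} weights per layer "
      f"(~{full_params // lora_params}x fewer)")
```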
LoRA Models vs. Base Models/Checkpoints
A question often asked by beginners is: how do LoRA models actually differ from Stable Diffusion models/checkpoints? We’ve already touched upon that topic in the previous section, so here is the rest of the answer.
Stable Diffusion checkpoints, often simply called “models”, are variations on the base Stable Diffusion 1.5 or 2.1 models fine-tuned on different types of imagery. You can find many different Stable Diffusion checkpoints online, available for free download on sites such as civit.ai or huggingface.co.
Different Stable Diffusion checkpoints usually have different designated uses, as they are fine-tuned on different image styles. For instance, you can find checkpoints that are best used for generating realistic photos, anime-styled images, pixel art characters and much more.

Stable Diffusion models/checkpoints are most of the time rather big files (a few gigabytes in size). In the Automatic1111 WebUI you can import and use different checkpoints simply by putting the checkpoint files inside the models/Stable-diffusion folder and selecting your desired checkpoint/model inside the WebUI before generating a new image.
LoRA models are essentially, as we’ve already said, Stable Diffusion cross-attention layer fine-tunings that are typically much smaller than full Stable Diffusion checkpoints and that need an actual Stable Diffusion checkpoint to be used alongside them. They can help you achieve different, consistent image generation styles depending on the LoRA you choose.
While Stable Diffusion checkpoints can be used without any additional data, a LoRA model has to be used alongside a Stable Diffusion model/checkpoint of your choice to generate images.
How To Use LoRA Models in Automatic1111 WebUI – Step By Step
The Automatic1111 Stable Diffusion WebUI has native LoRA model support, so you can use your newly downloaded LoRA models without installing any additional plugins, extensions or addons. Here is the quick step-by-step tutorial on using LoRA models in the SD WebUI.
To make use of LoRA models, the only two things you need to do are:
- Put your LoRA model files inside this folder: ~stable-diffusion-webui/models/Lora. The exact filepath will depend on the directory you’ve set up your WebUI in.
- After creating your prompt, click the Lora tab in the menu you can find right under the negative prompt window and choose your desired LoRA model from the list. After you do that, the LoRA activation phrase should show up in the prompt window like so: <lora:lorafilename:multiplier>, where lorafilename is the name of your downloaded LoRA file, and multiplier is the desired weight of the LoRA styling. The multiplier should be set between 0 (no LoRA style effect visible) and 1 (chosen LoRA in full effect). If you prefer scripting your generations, see the sketch right after this list.
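For the scripting route, here is a minimal sketch of sending the same kind of LoRA-tagged prompt through the WebUI’s built-in API. This assumes the WebUI was launched with the --api flag and is listening on the default local address; the LoRA name and multiplier in the prompt are just placeholders.

```python
import base64

import requests  # pip install requests

payload = {
    # The <lora:lorafilename:multiplier> tag works the same way as in the web interface.
    "prompt": "pixel art, bunch of red roses <lora:pixel_f2:0.5>",
    "negative_prompt": "(worst quality, low quality:2)",
    "steps": 34,
    "cfg_scale": 6.5,
    "width": 512,
    "height": 768,
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The API returns base64-encoded PNG images.
for i, img_b64 in enumerate(resp.json()["images"]):
    with open(f"lora_example_{i}.png", "wb") as f:
        f.write(base64.b64decode(img_b64))
```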

If no LoRA models show up after you click the LoRA library button, make sure that you’ve put your downloaded LoRA models in the right folder. They should be placed in ~stable-diffusion-webui/models/Lora. Once you have moved them over to this location, simply give the WebUI a restart, or refresh the LoRA list using the refresh button in the library itself.
Let’s move on to some examples of LoRA model usage in the Stable Diffusion WebUI!
Stable Diffusion LoRA Model Prompt Examples

Here are two examples of how you can use your imported LoRA models in your Stable Diffusion prompts:
- Prompt: (masterpiece, top quality, best quality), pixel, pixel art, bunch of red roses <lora:pixel_f2:0.5>
- Negative prompt: (worst quality, low quality:2)
- LoRA link: M_Pixel 像素人人 – Civit.ai – Pixel art style LoRA.
- Prompt: high quality, 3d render, blender render, one single metal cube in an empty room, <lora:stylized_3dcg_v4-epoch-000012:1>
- Negative prompt: (worst quality, low quality:2)
- LoRA link: Stylized 3D Model LoRA – Civit.ai – 3D rendered image style LoRA.
And here are the exact settings used for generating images in both of our examples:
- Steps: 34, Sampler: Euler a, CFG scale: 6.5, Seed: 1922503763, Size: 512×768, Model hash: c5b6055a84, Clip skip: 2, ENSD: 31337
Remember that both your prompt keywords and the multiplier value at the end of your LoRA activation will affect the image generation process. It’s best to use prompt keywords that are related to your chosen LoRA style to ensure more consistent styling of your newly generated images.
Another great thing is that you can also use LoRA models both for the img2img functionality of the Stable Diffusion WebUI and for inpainting. Although this requires modifying a few different settings to achieve satisfying results (for example the denoising strength in the img2img tab), it can yield great results once you experiment with it for a bit!
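As a rough sketch of what that can look like when scripted, here is an img2img call through the same WebUI API that keeps the LoRA tag in the prompt and exposes the denoising strength mentioned above. The file names, the 0.55 value and the LoRA name are placeholders to adjust, and the WebUI again needs to be running with the --api flag.

```python
import base64

import requests

# Load and base64-encode the source image for img2img.
with open("input.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [init_image],
    "prompt": "pixel art, bunch of red roses <lora:pixel_f2:0.5>",
    "negative_prompt": "(worst quality, low quality:2)",
    "denoising_strength": 0.55,  # lower values stay closer to the source image
    "steps": 34,
    "cfg_scale": 6.5,
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
resp.raise_for_status()

with open("img2img_result.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```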
Best Sources for Free LoRA Models

Here are the best sources we’ve found for downloading free LoRA models for Stable Diffusion. Feel free to browse through these sites to find the LoRA models with the image styles you need. Lots of great stuff here!
- Civit.ai – lots of great compact LoRA models, mostly anime-style and character-related fine-tunes.
- Huggingface.co – a more diverse set of LoRA models, a great collection worth exploring.
For now, these two places are the only ones with a reasonably sized library of different LoRA styles. When another big one emerges, we’ll add it to this list!
How To Train LoRA Models Yourself?
First of all, we have to say that training your own LoRA models is, for now, a rather complex and lengthy process, and therefore it’s outside the scope of this little article. However, we won’t leave you empty-handed!
Here is our short and straight-to-the point guide that will let you begin training your first LoRA model in around 20 minutes! – How To Train Own Stable Diffusion LoRA Models – Full Tutorial!
If you’d like to check out other, somewhat longer and more detailed LoRA training guides, here are the best resources we’ve found on training your own LoRA models, each with a short description of what you can expect when getting into it. Happy training!
- LoRA training guide Version 3 on Reddit / Imgur – The whole process explained in a long infographic – pretty neat.
- LoRA Training Guide Rentry – The lengthy training process explained in reasonably simple terms.
- “The Other LoRA Training Rentry” – A slightly longer guide covering a different set of LoRA training methods.
Good luck, and as always, we hope we were able to help!