Training & Personalization: LoRAs

Fine-tune the giants. Learn how to inject specific styles, characters, and concepts into Stable Diffusion without breaking the bank.

AI Instructor: Welcome to Advanced Synthetography. Standard models like Midjourney or Stable Diffusion are powerful, but they don't know *your* specific style, face, or product. To fix this, we use LoRAs.


What is a LoRA?

LoRA (Low-Rank Adaptation) is a training technique that allows us to fine-tune large models like Stable Diffusion without retraining the entire neural network. It injects small "adapter" layers that modify the model's behavior.
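The "adapter" idea can be sketched in a few lines. This is a pure-Python toy (all dimensions and numbers are illustrative, not taken from Stable Diffusion): a full fine-tune would rewrite every entry of the frozen weight `W`, while LoRA learns a small low-rank delta `B @ A` on the side.

```python
# Toy low-rank adapter: effective weight = W + scale * (B @ A).
# W stays frozen; only the tiny A and B matrices are trained.

W = [[1.0, 0.0],
     [0.0, 1.0]]        # frozen base weight (2x2 for readability)
A = [[0.5, 0.5]]        # trainable 1x2 "down" matrix (rank 1)
B = [[0.2],
     [0.4]]             # trainable 2x1 "up" matrix

def matmul(X, Y):
    """Plain nested-loop matrix multiply."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def adapted_weight(scale=1.0):
    """Weight the model effectively uses at inference time."""
    delta = matmul(B, A)                       # rank-1 update, B @ A
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

print(adapted_weight(1.0))   # roughly [[1.1, 0.1], [0.2, 1.2]]
print(adapted_weight(0.0))   # scale 0 disables the adapter: plain W
```

The `scale` parameter is the same knob exposed as the LoRA "weight" in prompt syntax: at 0 the base model is untouched, and raising it blends in more of the trained concept.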

System Check: Why do we use LoRAs instead of full checkpoints?

Why LoRAs Changed the Game

Before LoRA, if you wanted a model to generate your specific product or face, you had to run "Dreambooth" training on the entire 2-4 GB checkpoint. This was slow, expensive, and produced massive output files.
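Some back-of-envelope arithmetic shows why the files shrink so dramatically. Assume a single square projection layer of width 768 (SD 1.5's text-encoder width; real checkpoints contain many layers of varying sizes) and a LoRA rank of 8, a common training choice:

```python
# Trainable parameters for one 768x768 layer: full fine-tune vs. LoRA.
# d = 768 and rank = 8 are illustrative, typical values, not fixed rules.
d, rank = 768, 8

full_finetune = d * d                  # every weight entry is trainable
lora = (rank * d) + (d * rank)         # A is (rank x d), B is (d x rank)

print(full_finetune)                   # 589824
print(lora)                            # 12288
print(full_finetune // lora)           # 48x fewer trainable parameters
```

The same ratio applies layer by layer across the network, which is why a LoRA weighs megabytes where a Dreambooth checkpoint weighs gigabytes.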

1. The "Sandwich" Concept

Think of the Base Model (SD 1.5 or SDXL) as the bread. It knows the basics. The LoRA is the filling—it adds the specific flavor (style/object). You can swap fillings easily without changing the bread.
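In WebUI prompt syntax, swapping the "filling" is a one-line change while the base model stays loaded. The LoRA names below are hypothetical placeholders:

```text
masterpiece, castle on a hill, <lora:watercolor_v2:0.8>
masterpiece, castle on a hill, <lora:pixelArt_v1:0.8>
```

Same bread, different filling: only the `<lora:...>` tag changes between the two prompts.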

2. Avoid "Fried" Images

A common mistake is using a weight of 1.0 for every LoRA. If your image looks oversaturated or full of artifacts, lower the weight to around 0.7.
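In the prompt tag, the weight is the final number after the LoRA's name. The name below is a hypothetical placeholder:

```text
<lora:myStyle_v1:1.0>   full strength, often "fried"
<lora:myStyle_v1:0.7>   same LoRA, usually cleaner
```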

LoRA Glossary

LoRA (Low-Rank Adaptation)
A small file (approximately 100 MB) containing trained weights that steer a larger model's output toward a specific concept.
Example (prompt.txt): `<lora:pixelArt_v1:1.0>` → Generation result: pixel-art output.
Trigger Word
A specific keyword used during training. The LoRA might not activate unless this exact word is present in the prompt.
Example (prompt.txt): `sks_dog, <lora:my_dog:1>` → Generation result: the specific trained dog.