Training & Personalization: LoRAs

Go beyond generic prompts. Learn to train custom models on your own datasets, faces, and styles.

kohya_config.json
// LoRA Configuration
"network_module": "networks.lora",
"learning_rate": 0.0001,
"train_batch_size": 1,
"mixed_precision": "fp16"

Guide: Welcome to Model Personalization. Standard models like Midjourney or Stable Diffusion know 'general' concepts, but they don't know YOUR product, YOUR face, or YOUR specific artistic style. We fix this with Training.


AI Model Training Mastery

Unlock nodes by learning to train LoRAs and customized models.

Step 1: LoRA Fundamentals

LoRA (Low-Rank Adaptation) works by freezing the pre-trained model weights and injecting trainable rank decomposition matrices into the Transformer layers. This means you don't retrain the whole brain, just a small "adapter" for it.
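The idea above can be sketched in a few lines of NumPy. This is an illustration of the math, not the actual kohya_ss implementation; the dimensions, rank, and variable names are all assumptions for the example. The frozen weight W stays untouched while two thin matrices, A and B, form the trainable "adapter".

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank = 64, 64, 4          # rank << dimension, hence "low-rank"

W = rng.normal(size=(d_out, d_in))     # frozen pre-trained weight

# Trainable adapter: B starts at zero, so at step 0 the model is unchanged.
A = rng.normal(size=(rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

def forward(x, scale=1.0):
    # Original path plus the low-rank update (B @ A); only A and B train.
    return W @ x + scale * (B @ A) @ x

x = rng.normal(size=d_in)
assert np.allclose(forward(x), W @ x)  # identical to the base model before training
```

Note how small the adapter is: W holds 64 × 64 = 4096 values, while A and B together hold only 4 × 64 + 64 × 4 = 512.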

Knowledge Check

Why is LoRA preferred over full fine-tuning for most creators?


Training Lab Community

Recent Models Shared

Cyberpunk_V3 LoRA (SDXL)

Posted by: NeonDreamer

Help: Loss is NaN after 100 steps?

Posted by: NewTrainer

LoRA Peer Review

Upload your .safetensors file and sample images for feedback on flexibility and overfitting.

The Art of Fine-Tuning: Custom Models

AI Instructor

Dr. Elena Tensor

AI Art Director & ML Researcher.

Generative AI models are powerful, but generic. To achieve true brand consistency or replicate a specific artistic style, prompting alone is not enough. You must train the model.

1. LoRA vs. Dreambooth

Dreambooth fine-tunes the entire model (2GB-6GB). It is powerful but heavy. LoRA (Low-Rank Adaptation) inserts small, trainable layers into the model. The result is a tiny file (50MB-150MB) that can be plugged into the main model to change its behavior instantly.

2. The Dataset is Everything

Garbage in, garbage out. A dataset of 15 high-quality, consistently cropped images is better than 100 blurry ones.

❌ Bad Dataset

Images with watermarks, inconsistent lighting, mixed aspect ratios without bucket resizing, and vague captions like "image of thing".

✔️ Good Dataset

High-res images, diverse angles of the same subject, detailed captions describing background and lighting (so the AI learns to separate subject from environment).

3. Understanding "Repeats" & "Epochs"

Repeats is the number of times the trainer shows the AI each image before moving to the next one in the folder. An Epoch is complete once the AI has seen *every* image in your dataset (times its repeats).

Rule of Thumb: For a Face LoRA, aim for roughly 1500 total steps. If you have 15 images and set 10 repeats, that's 150 steps per Epoch. You would need about 10 Epochs.
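The rule of thumb above is just arithmetic, assuming a batch size of 1 as in the config shown earlier:

```python
# Face-LoRA step budget from the rule of thumb (batch size 1 assumed).
images, repeats, target_steps = 15, 10, 1500

steps_per_epoch = images * repeats             # 15 images x 10 repeats = 150
epochs_needed = target_steps // steps_per_epoch

print(steps_per_epoch, epochs_needed)          # prints: 150 10
```

Adjust any one input and the others follow: with 30 images you would halve the repeats (or the epochs) to stay near the same total step count.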

AI Training Glossary

LoRA (Low-Rank Adaptation)
A training technique that adds small, trainable layers to the model instead of retraining the whole thing. Fast and lightweight.
config.json / prompt
<lora:my_custom_style:0.8>
Epoch
One complete pass through the entire training dataset. Too few epochs = underfitting. Too many = overfitting.
config.json / prompt
"max_train_epochs": 10
Instance Prompt
The unique keyword used to trigger your trained concept. It should be a rare token (e.g., 'ohwx' or 'sks').
config.json / prompt
"instance_prompt": "photo of sks dog"
Regularization Images
Generic images of the class (e.g., generic dogs) used during training to prevent the model from forgetting what a 'dog' is.
config.json / prompt
"reg_data_dir": "class_images/dog"