QLoRA Fine-Tuning of FLUX.1-dev on an RTX 4090
AI Impact Summary
The team is LoRA fine-tuning the FLUX.1-dev model on consumer hardware, specifically a single NVIDIA RTX 4090 (24 GB VRAM), using QLoRA techniques. The base model is quantized to 4-bit with bitsandbytes, and only low-rank LoRA adapters (rank r) are trained on top of the frozen quantized weights. Optimizations such as gradient checkpointing and latent caching further reduce VRAM usage. The goal is to adapt the model to the artistic style of Alphonse Mucha, demonstrating efficient fine-tuning on relatively modest hardware.
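To see why this fits on a 24 GB card, the arithmetic behind the 4-bit + LoRA setup can be sketched in plain Python. The figures below are illustrative assumptions, not measured values: a base transformer of roughly 12 billion parameters (FLUX.1-dev's approximate size), a hypothetical LoRA rank r = 16, and a hypothetical hidden width of 3072 for the attention projections the adapters attach to.

```python
# Back-of-envelope parameter and weight-memory arithmetic for QLoRA.
# All concrete numbers here are illustrative assumptions.

def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters for one LoRA pair: A (d_in x r) plus B (r x d_out)."""
    return d_in * r + r * d_out

def base_model_bytes(n_params: int, bits: int) -> int:
    """Approximate memory for base-model weights at a given bit width."""
    return n_params * bits // 8

# One square 3072-wide projection with rank 16 adds ~98k trainable params,
# a tiny fraction of the frozen base model.
per_layer = lora_params(3072, 3072, 16)

# 4-bit weights for a ~12B-parameter base take ~6 GB, versus ~24 GB in bf16,
# which is what makes a single RTX 4090 (24 GB) workable once activations
# are tamed with gradient checkpointing and latent caching.
four_bit_gb = base_model_bytes(12_000_000_000, 4) / 1e9
bf16_gb = base_model_bytes(12_000_000_000, 16) / 1e9

print(per_layer, four_bit_gb, bf16_gb)  # → 98304 6.0 24.0
```

Note this counts weight memory only; optimizer state for the adapters, activations, and quantization metadata add overhead on top, which is why the checkpointing and caching tricks still matter.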
Affected Systems
- Date: not specified
- Change type: capability
- Severity: info