Low-rank adaptation (LoRA) is a parameter-efficient technique for fine-tuning models that has several advantages over earlier methods:
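The core idea can be sketched in a few lines: the pretrained weight matrix W stays frozen, and only a low-rank update B·A is trained, scaled by α/r. This is a minimal NumPy sketch, not a production implementation; the class name, dimensions, and initialization constants here are illustrative assumptions.

```python
import numpy as np

class LoRALinear:
    """Frozen dense layer W plus a trainable low-rank update B @ A (LoRA sketch)."""

    def __init__(self, d_in, d_out, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        # Stand-in for a frozen pretrained weight (illustrative init).
        self.W = rng.standard_normal((d_out, d_in)) * 0.02
        # LoRA factors: A gets a small random init, B starts at zero,
        # so the update B @ A is zero at the start of training.
        self.A = rng.standard_normal((r, d_in)) * 0.01
        self.B = np.zeros((d_out, r))
        self.scale = alpha / r

    def __call__(self, x):
        # y = x W^T + (alpha/r) * x A^T B^T
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

    def merged_weight(self):
        # Fold the low-rank update into W; after merging, inference
        # uses a single dense matrix with no extra latency.
        return self.W + self.scale * self.B @ self.A
```

Because B is initialized to zero, the adapted layer initially reproduces the frozen pretrained layer exactly, and after training the update can be merged into W so the deployed model pays no runtime cost for the adapter.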