Low-rank adaptation (LoRA) is among the most widely used and effective techniques for efficiently finetuning custom LLMs, and it is an essential technique for anyone working with open-source LLMs to be familiar with. Last month, I shared an article covering several LoRA experiments based on the open-source Lit-GPT repository, which I co-maintain with my colleagues at Lightning AI. This Ahead of
*Header image: Practical Tips for Finetuning LLMs Using LoRA (Low-Rank Adaptation)*
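To make the core idea concrete, here is a minimal NumPy sketch of what LoRA does (an illustration of the technique in general, not the Lit-GPT implementation): rather than updating a full weight matrix W during finetuning, LoRA freezes W and trains two small matrices A and B, so the effective weight becomes W + (alpha / r) * B @ A. The dimensions and scaling factor below are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Example dimensions: r << d_in, d_out keeps the trainable update low-rank.
d_out, d_in, r, alpha = 8, 8, 2, 4

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight (not trained)
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init => no change at start

def lora_forward(x):
    # Base projection plus the low-rank correction, scaled by alpha / r.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# Because B starts at zero, the adapted layer initially matches the frozen layer.
assert np.allclose(lora_forward(x), W @ x)
```

The payoff is in the parameter count: A and B together hold r * (d_in + d_out) values instead of the d_in * d_out values of a full weight update, which is what makes LoRA so memory-efficient when r is small.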