1.8-3.3x faster Embedding finetuning now in Unsloth
unsloth.ai

Do the memory savings carry over to inference, or is this strictly optimizing the backward pass? I'm running embedding pipelines via Celery, and being able to squeeze this into lower VRAM would help the margins quite a bit.
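For intuition on why backward-pass optimizations don't automatically carry over to inference: the extra memory they target is autograd bookkeeping (saved activations for gradients), which is never allocated under inference mode. A minimal PyTorch sketch with a toy stand-in model (not Unsloth's actual code):

```python
import torch
import torch.nn as nn

# Toy stand-in for an embedding model: embedding table + projection.
model = nn.Sequential(nn.Embedding(100, 32), nn.Linear(32, 16))
tokens = torch.randint(0, 100, (4, 8))

# Training-style forward: outputs track gradients, so activations
# are retained for the backward pass -- this is what backward-pass
# memory optimizations reduce.
out_train = model(tokens)
print(out_train.requires_grad)  # True

# Inference-mode forward: no autograd bookkeeping at all, so that
# activation memory is never allocated in the first place.
with torch.inference_mode():
    out_infer = model(tokens)
print(out_infer.requires_grad)  # False
```

So inference VRAM is dominated by weights and the forward pass itself; savings there would have to come from something like quantized weight loading rather than backward-pass changes.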
Excited to have collabed on this! Thanks to electroglyph for the contribution!