Show HN: Backprop – a simple library to use and finetune state-of-the-art models
github.com

This is a PyTorch-based library my team and I have been working on for the last few months, with the goal of making finetuning and using models as easy as possible for devs, even those without extensive ML experience.
We currently support text and image-based tasks (classification, generation, Q&A, etc.), with wrappers around models like Google's T5, OpenAI's CLIP, GPT-2, Facebook's BART, and others.
We've also built some features that make deployment easy. For full transparency: deployment goes through a paid platform [0] we've developed, but the platform is by no means necessary to use the library.
We're happy with the progress we've made, but we're curious to hear what people think so we can keep improving.
Just commented on your last post so copying here:
This looks pretty slick! Can you give any sort of average number of seconds your pre-trained models run for? Just curious approximately how many API calls one could make on each tier given the per second usage pricing.
Thanks! Great question. Latency varies a lot depending on the task, model, and input.
For example, if you do something quick like text vectorisation on a couple of sentences, it's less than 100ms per call. That works out to 10,000, 50,000 and 200,000 calls (0.0005, 0.0002, 0.0001 per additional call) respectively.
On the other end, generating around 40 words with GPT-2 Large takes around 2500ms per call, giving 400, 2,000 and 8,000 calls (0.0125, 0.005, 0.0025 per additional call) respectively.
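To make the arithmetic behind those figures explicit, here's a minimal sketch. The per-tier compute budgets (1,000s / 5,000s / 20,000s) are an assumption inferred by working backwards from the call counts quoted above, not official numbers from the pricing page.

```python
# Back-of-the-envelope check of the call counts quoted above.
# TIER_SECONDS is a hypothetical set of per-tier compute budgets,
# inferred from the quoted figures (e.g. 400 calls * 2.5s = 1,000s).
TIER_SECONDS = [1_000, 5_000, 20_000]

def calls_per_tier(latency_s: float) -> list[int]:
    """How many calls each tier's budget covers at a given per-call latency."""
    return [int(budget / latency_s) for budget in TIER_SECONDS]

print(calls_per_tier(0.1))  # vectorisation at ~100ms -> [10000, 50000, 200000]
print(calls_per_tier(2.5))  # GPT-2 Large generation at ~2500ms -> [400, 2000, 8000]
```

Both results match the per-tier call counts given in the reply, which suggests the tiers are simple fixed budgets of model-compute seconds.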
Completely understandable, and I appreciate the realistic estimates you've provided. The usage per tier definitely seems fair. I'll keep an eye on this project and hopefully circle back soon. Thanks for the reply!