Google Colab Pro is switching to compute credits

93 points by 3wolf 3 years ago · 88 comments


Just got this in my inbox. They haven't updated the FAQs page yet, as far as I can tell.

> Hi-
>
> We’re improving the Terms of Service that apply to your Colab Pro or Colab Pro+ subscription, making them easier for you to understand and improving the ways you can use Colab. The changes will take effect on September 29.
>
> The [updated Terms of Service](https://research.google.com/colaboratory/tos_v3.html) include changes that will allow you to have more control over how and when you use Colab, allowing us to offer new services and features that will enhance your experience using Colab.
>
> We will increase transparency by granting paid subscribers compute quota via compute units, which will be visible in your Colab notebooks, allowing you to understand how much compute quota you have left. These compute units are granted monthly and will expire after 3 months. You will be entitled to a certain number of compute units based on your subscription level and will have the ability to purchase more compute units as needed.
>
> Additionally, we will allow paid subscribers to exhaust their compute quota at a much higher rate. This will result in paid subscribers having more flexibility in accessing resources. Read more about these changes at our [FAQ](https://research.google.com/colaboratory/faq.html#compute-units).
>
> If you would like to cancel your Colab Pro or Pro+ subscription, you can do that by going to pay.google.com and clicking Subscriptions and services. If you have any trouble canceling, you can email colab-billing@google.com for assistance. Please include an order number from one of your receipt emails if you email us for assistance.
>
> -The Colab team
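The grant-and-expire mechanics the email describes (monthly grants, three-month expiry, optional top-ups) can be sketched as a small accounting model. All numbers here are hypothetical illustrations; Google has not published per-tier unit counts in this announcement, and the oldest-lot-first consumption order is an assumption.

```python
from collections import deque

MONTHLY_GRANT = 100   # hypothetical units per subscription month
EXPIRY_MONTHS = 3     # per the email, units expire 3 months after being granted

class ComputeBalance:
    """Toy model of a grant-and-expire quota: each grant is a
    [expiry_month, units] lot, and consumption drains the
    soonest-to-expire lot first."""

    def __init__(self):
        self.lots = deque()  # lots in grant order, which is also expiry order

    def grant(self, month, units=MONTHLY_GRANT):
        self.lots.append([month + EXPIRY_MONTHS, units])

    def _expire(self, month):
        # Drop lots whose expiry month has arrived.
        while self.lots and self.lots[0][0] <= month:
            self.lots.popleft()

    def consume(self, month, units):
        """Spend `units` from the oldest lots; False means quota exhausted."""
        self._expire(month)
        for lot in self.lots:
            take = min(lot[1], units)
            lot[1] -= take
            units -= take
            if units == 0:
                return True
        return False  # out of units; the user would need to buy more

    def balance(self, month):
        self._expire(month)
        return sum(lot[1] for lot in self.lots)
```

For example, with 100-unit grants in months 0 and 1, a 150-unit spend in month 1 drains the month-0 lot first, and whatever remains of the month-1 lot disappears at month 4.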

derefr 3 years ago

You can really tell, in the comments sections of changes like these, who is speaking from the perspective of having a professional/business vs. a personal use-case.

Individuals tend to be upset, while professionals are happy that individual free-riders will no longer be sucking up undue amounts of compute power, and so QoS on the system will improve for them.

  • TigeriusKirk 3 years ago

    I'd think anyone trying to run a business off a $10 or $49 tier is the one sucking up undue compute power.

    • derefr 3 years ago

      A business wants to pay (at least) what something costs, because 1. they’re making money themselves from the result, and 2. they don’t want the thing they depend on to stop being offered. You’re not a free rider per se if you want a cost-plus pricing model but it’s just not on offer. (In that case, it’s instead the provider’s fault for not capturing your value surplus, similar to how, in scalping, it’s the original seller’s fault for not charging what the market will bear.)

      • ponow 3 years ago

        > similar to how, in scalping, it’s the original seller’s fault for not charging what the market will bear

        Absolutely. Unfortunately, people hate scalpers, irrationally if you take a broad view.

      • version_five 3 years ago

        Google's business model is exploiting people's data to sell advertising. If you use a google "product" that's not advertising, you're not paying what it costs, at least in money. In line with what you're saying, I want to compensate a cloud provider for their services in a way that makes me a customer instead of a vector for advertising or data exploitation.

        • derefr 3 years ago

          Google Cloud, in fact, charges you what things cost (with markup, and economies of scale), just like AWS does. There are no loss-leader Google Cloud services. Google has separate b2b and b2c business models.

          (I should know; we are a GCP customer, and we pay them quite a lot, in fact. They pinch every single penny, as they should. Nothing costs less than it would to do it on bare metal ourselves; we only save in CapEx in that we don’t have to pay ourselves for as many servers as are required to run something like e.g. reliable object storage, or BigQuery’s hot-idled compute.)

        • XorNot 3 years ago

          The mistake you're making is thinking that paying them would mean they stop doing this. Why would they? You're not aware of how they're doing it now; why would money being involved change it, unless there was a risk of you leaving the service?

          The problem with advertising is that paying for a product just proves you have disposable cash to pay, which in turn makes you a more valuable advertising target.

          Amazon, for example, aggressively monitors the types of services people are building to run on AWS, and then launches competing products as "native" AWS services - knowing that to the rest of their customers, buying the "AWS native" thing is much more appealing than dealing with any 3rd party vendor.

  • Jugurtha 3 years ago

    I've explored Colab users as a target audience for our product, especially given that practically all the posts on the GoogleColab subreddit complain about how bad it is. Even those with Pro+ or Pro tend to revert to the free Colab offer because there's no transparency.

    What I understood from my interactions is that they complain but will not use a paid product, because even though they're paying anywhere from nothing to $49, the actual resources used are in the $800/month ballpark (notebooks running 23 hours per day, seven days a week, using a GPU).

    These are clearly hobbyists. The pros had different problems such as not being able to pay for it from certain countries.

    In other words, there are people who need a notebook to run and not crash and who are willing to pay for that, and there are others working on toy projects/individual pet projects with no real stakes who'll complain about it but won't switch, because another company will not really subsidize usage.

    Yes, there are other companies that offer notebooks, but our product was for professionals in the ML field, and there's much more to an ML project than running a notebook (real time collaborative notebooks, automatic experiment tracking, plugging compute from any cloud provider, one click model deployment, object storage like a filesystem, live monitoring dashboard for deployed models, and more).

discordance 3 years ago

Have been experimenting lately with GPUs off vast.ai. Has worked well for experiments with Stable Diffusion and is cheap!

Any other suggestions for where to rent cheap GPUs? I've heard about Hetzner (https://www.hetzner.com/sb?search=gpu), but they are 1080s.

  • frederickgeek8 3 years ago

    I tried using the Paperspace Gradient "Growth" plan for Teams. The product was so buggy it was unusable. Their support and engineers were wonderful to talk to, but they admitted that there are a lot of features that just don't work and they don't have the bandwidth to fix them. It seems like an early product and I wouldn't recommend it if you need something dependable, at least not now. I would love to work with them in the future if the stability improves.

  • etaioinshrdlu 3 years ago

    Vast.ai told me that, as far as they know, they are usually the cheapest option, but that sometimes Lambda offers something similar or slightly lower.

  • sabalaba 3 years ago

    Lambda has $1.10/hr A100 instances. That's less than half the price of GCP on demand. https://lambdalabs.com/service/gpu-cloud

    • mark_l_watson 3 years ago

      If I may ask, and if you use Lambda Labs' GPU cloud, how do you like it? I have visited their pricing page about ten times, and their price for a single A100 is very good, but I haven't talked to any customers. Do you basically get a VPS with a GPU and all the usual software installed, and SSH/Mosh into it?

      • why_only_15 3 years ago

        Yeah, I found Lambda super easy to use, especially compared to GCP etc. It took me about 5 minutes to get a model up and running. They don't support launching machines with Docker images or Kubernetes or anything like that, though (a problem if you want to run a business on it), and recently they are extremely supply constrained (the only available machine type is 8xA100; no A6000s or V100s).

  • AdamJacobMuller 3 years ago

    Paperspace, Vultr

  • zhl146 3 years ago

    Feel free to check out https://www.runpod.io/ :)

Aeolun 3 years ago

Am I the only one who thinks it’s nice they’re being explicit about how much they’re giving you? I found the original ‘however much we have available and feel like giving to you’ plan limit highly unprofessional.

I got an A100 after I subscribed, so it worked out for me, but it's still annoying that you don’t know what you’ll get.

mark_l_watson 3 years ago

I deeply appreciate Colab. I bought a nice home GPU rig a few years ago, but seldom use it. When I am lightly using Colab I use it for free and when I have more time for personal research the $10/month plan works really well. I can see occasionally paying for the $50/month plan as the need arises in the future.

I am working on an AI book in Python. (I usually write about Lisp languages.) About half the examples will be Colab notebooks and half will be Python examples to be run on laptops.

In any case, I like the soon-to-be-implemented changes; getting credits and seeing a readout of usage and what you have left sounds like a good idea.

  • cperry 3 years ago

    Thanks! I think people will much prefer this over the current opaque system.

    I read every feedback submission in Colab so if you ever have feedback you'd like addressed, send away.

    • mardifoufs 3 years ago

      Thanks a lot for Colab. I have access to really powerful compute from my job, yet I still find myself using Colab a lot. The only problems I had with it were the very vague FAQ and the spotty resource allocations, but the new FAQ is much better. Thanks again!

      • cperry 3 years ago

        I hope it works better for you, please send feedback if it doesn't!

        • anymoonus 3 years ago

          Colab is awesome, thank you.

          I'll take the opportunity to request better editing ergonomics: the ability to connect from Jupyter/notebook-supporting editors and IDEs; the ability to open/edit .ipynb files from the local disk and/or Github without having to first put them on Google Drive.

          Colab/Jupyter and friends are reinventing many wheels around editing code, and it would be nicer for them to support tools like Jupytext.

goodfight 3 years ago

Reeling us in with unlimited and locking it down. Classic.

  • cperry 3 years ago

    PM for Colab - I wrote the email.

    There's no intention to lock it down (whatever that would mean). We ensure notebooks are totally portable to any other Jupyter install you want to move to.

    This change is about laying the groundwork for increased transparency for your paid compute consumption, vs. the current model of kind of hiding that away.

  • xkapastel 3 years ago

    It wasn't unlimited though, there was always a quota. It just wasn't visible.

  • TorqueFilet 3 years ago

    Used Google Colab for the last 8 months, will fully divest from them with this change...

    • desindol 3 years ago

      If you used it in the last 8 months and didn’t get restricted you won’t be now. Reading comprehension and stuff.

      • TigeriusKirk 3 years ago

        We actually don't know. The email doesn't indicate if the available compute will stay the same.

        • desindol 3 years ago

          Read it again. If they were changing the compute amount they would have to tell you; that's how contracts work, and no, Google is not dumb enough to do it behind your back.

    • cperry 3 years ago

      The change is meant to provide increased visibility into your paid compute consumption.

      • LuciferSam86 3 years ago

        Yeah, I was looking at the Google Colab GitHub page. There were some issues along the lines of "Hey! I got a Pro(+) account, and I keep getting limited, but I don't know how I can check how many resources I still have."

        I think this is one of the reasons Google is switching to this model.

        • cperry 3 years ago

          > I think this is one of the reasons Google is switching to this model.

          By far, the biggest user complaint I get is "why can't I access a GPU" which this change will address.

  • geogra4 3 years ago

    Yep, similar to what they did with Google photos/gmail

  • mardifoufs 3 years ago

    What does this mean in this context? What's being locked down?

frognumber 3 years ago

I like the transparency, but this doesn't feel like the right way to do it. Computation should be free (or nearly free) if there's idle capacity, paid if Google is near capacity, and expensive/bidding if Google is above capacity.

Flat compute units seem simple, but result in a lot of waste.
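The tiered scheme described above (free when idle, paid near capacity, bid-priced above capacity) can be sketched as a toy pricing function. The thresholds and the quadratic surge below are arbitrary assumptions for illustration, not anything Google, AWS, or any provider actually uses.

```python
def price_multiplier(utilization: float, base_price: float) -> float:
    """Toy tiered price for the scheme above: free with idle capacity,
    ramping to the base price as the cluster fills, and a super-linear
    surcharge (standing in for an auction) above capacity."""
    if utilization < 0.5:
        # Plenty of idle capacity: compute is free.
        return 0.0
    if utilization <= 1.0:
        # Approaching capacity: ramp linearly up to the base price.
        return base_price * (utilization - 0.5) / 0.5
    # Above capacity: bidding territory, modeled here as a quadratic surge.
    return base_price * utilization ** 2
```

In practice a provider would set the free threshold from historical idle capacity and replace the surge term with an actual auction (as AWS/GCP spot pricing does), but the shape of the curve is the point: price signals push deferrable load into off-peak hours.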

  • skybrian 3 years ago

    There might not be servers going idle. Google has plenty of batch jobs they can run in lower-traffic times, both internal and external.

    And if it does go idle, it saves energy, which costs money. At scale, compute isn't free.

  • moralestapia 3 years ago

    >Computation should be free (or nearly free) if there's idle capacity

    I guess nothing stops you from buying infra and offering it for "free (or nearly free)".

    • geodel 3 years ago

      Agreed. I just do not like restaurants that charge me full price for burgers when there are clearly more patties on the grill than customers at that time.

      • mobiuscog 3 years ago

        So you also wouldn't expect to be paid when a company has less work than it has employees during a week?

        • geodel 3 years ago

          That would be the logical conclusion. I think people looking for (nearly) free Colab would agree.

      • frognumber 3 years ago

        There are stores in my area which sell overstock food products at a discount before they expire. It costs the stores less than simply wasting the food, and people who aren't picky about what they buy are able to get quality food at a low price.

        And to the employee point, overtime (and contractors) typically get paid more to fill gaps when demand exceeds supply. In some cases, when there is reduced demand, employees are offered furloughs. I recall during a recession when I was a child, my Dad spent about a month at home, but was still paid some fraction of his normal salary to stay on payroll.

        Adjusting pricing to supply and demand isn't exactly crazy, freeloading, or communism. If Colab were designed to generate a profit, the same type of pricing as AWS and GCP use would make sense. Since it seems designed to position Google within the ML ecosystem, free makes sense.

        As a footnote, I don't use Colab. That's not because of pricing, but because in free offerings like these, I am the product. That's not always a bad deal, but it is for me for the type of work I do. I prefer running ML models locally (I have a pretty fancy GPU), using my employer's in-house cluster, or paying for AWS. I don't discourage others from using Colab (that decision is specific to what I do).

        I'm not sure why this discussion is so hostile, and reads so much false subtext into statements which shouldn't be controversial at all.

  • johndfsgdgdfg 3 years ago

    > Computation should be free (or nearly free) if there's idle capacity

    HN used to be a place for interesting discussions. Now it's a grievance forum for entitled freeloaders.

  • knorker 3 years ago

    How do you mean? Should GCS storage also be free, unless Google is nearing storage capacity?

    • frognumber 3 years ago

      That's a little bit different. I would assume Google grows GCS over time to meet demand. Most of the demand is static. If Google needs 1PB of storage, they will probably have 1.01PB, and the amount won't go down.

      Compute is dynamic. You might be above capacity for Christmas shopping, and below capacity at 4am in the middle of the weekend.

      By varying pricing, you can be more efficient. People who can will smooth out that load. If I don't need to run something during peak hours, I might wait until off-peak. Google needs less capacity. Everyone comes out ahead.

      For a profit-making project, dynamic pricing makes sense. I suggested free since the primary goal of Colab isn't to make money (but they also don't want to subsidize it too much, so they do need to charge).

      • knorker 3 years ago

        > By varying pricing, you can be more efficient.

        AWS and GCP have spot pricing on VMs, so they do have products that do this.

        Maybe that's enough. When Colab load goes low, they can turn down a bunch of Colab tasks, and sell the freed capacity as Spot VM / preemptable VMs.

        I'm sure there are many big companies out there that essentially have a standing bid for any compute cheaper than $X, and will soak it up as Spot VMs.

        Not every GCP product needs to have spot pricing to be efficient. In fact if you silo capacity per product then you'll be less efficient. E.g. someone is willing to pay for spot VMs, but you only have spot Colab available.

  • Mathnerd314 3 years ago

    A computation could use fewer compute units if resources are idle. There isn't enough information to make a judgment.

fibrennan 3 years ago

At Paperspace we've long offered an alternative to Google Colab that includes free CPU, GPU, and (recently released) IPU machines.

Free notebooks can be run for 6 hours at a time.

More info available in docs: https://docs.paperspace.com/gradient/machines/#free-machines...

moconnor 3 years ago

At last! I love Colab but the vague promises around availability and quota made it impossible to recommend for my team to use professionally.

I even tried and failed to get it up and running with a Google Cloud GPU recently, before just switching to Lambda, which worked the first time (but has since hit availability issues).

stableskeptic 3 years ago

Question for the Colab team:

The restrictions listed at https://research.google.com/colaboratory/tos_v3.html differ slightly from the limits listed at https://research.google.com/colaboratory/faq.html. Specifically, tos_v3.html does not mention these items from the FAQ:

    * using a remote desktop or SSH
    * connecting to remote proxies
I can appreciate why those were added - I've read posts and notebooks explaining how you can use ngrok or Cloudflare to do those things in violation of the restrictions in the FAQ, and clearly many people aren't using Colab as intended.

Speaking as someone who has been playing around with the Colab free tier with the expectation of moving to a paid service once I know what I'm really doing, I'd like to know if it's likely these restrictions will be eased a bit with the move to a compute credit system.

I'm still learning and haven't had a need to do those things yet but I believe remote ssh access would greatly simplify managing things. The Jupyter interface and integrated Colab debugger are good for experimenting but I'm worried that as I get closer to production I'll need a way to observe and change the state of long-running Colab processes the way I could with ssh, ansible or other existing tooling.

Clearly I can build that myself or use something like Anvil Works https://anvil.works but that's time and effort I'd rather avoid if possible. So I'm hoping that the Colab team will ease the SSH restriction for people like me who want to use it for more traditional ops/monitoring of long running tasks.

Do you anticipate any change or easing of the SSH restriction?

  • cperry 3 years ago

    I do not anticipate changes in the short term, but I am always open to changes in the medium term.

    Both of those address angles of abuse that I don't want to discuss in big forums, and go counter to interactive notebook compute, our top priority.

etaioinshrdlu 3 years ago

Lambda Labs has run out of GPUs to rent lately. I think it’s too many people running SD.

  • LittlePeter 3 years ago

    Barely two weeks after Stable Diffusion release, we use SD as its acronym? That's fast.

roboy 3 years ago

I really like the increase in transparency; I found it somewhat disturbing to pay for what feels like a random amount of stuff. How should I know if I need Pro or Pro+ if there is no estimate of what either might get me? The update does not seem to change that, though. I would love to see a plotted distribution of how much compute I might expect, or at least the min/average/max run time until disconnect (right now only the max is known).

  • cperry 3 years ago

    I aspire to offer that level of transparency. I am foiled by the facts that (1) GPU prices can change randomly on me, and (2) it's hard to convey pricing to a user without giving them a huge, incomprehensible price sheet.

    All is not lost though, I've got a few irons in the fire that should help resolve those points of feedback over the coming year.

    In the meantime, you can always just buy a GCP VM and have all the certainty you want: https://research.google.com/colaboratory/marketplace.html. I find most people don't want that because it's a pain that Colab Pro/Pro+ largely abstracts away.

  • visarga 3 years ago

    They could benchmark a few architectures (ResNet50, BERT) and tell us how many times we can train a model on a specific subscription level.
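That suggestion could look something like this in practice: benchmark a couple of reference models, then translate a tier's monthly unit grant into whole training runs. Every number below (unit burn rates per GPU, benchmark wall-clock hours, the 100-unit grant) is invented for illustration; real values would have to come from benchmarking on Colab's actual hardware.

```python
# Assumed unit burn rates per GPU-hour and benchmark wall-clock hours per
# full training run. All figures are hypothetical placeholders.
UNITS_PER_GPU_HOUR = {"T4": 2.0, "V100": 5.0, "A100": 13.0}
BENCHMARK_HOURS = {"ResNet50": 6.0, "BERT-base": 10.0}

def full_runs_per_grant(model: str, gpu: str, monthly_units: float) -> int:
    """How many complete training runs of `model` on `gpu` a monthly
    compute-unit grant buys (partial runs don't count)."""
    cost_per_run = BENCHMARK_HOURS[model] * UNITS_PER_GPU_HOUR[gpu]
    return int(monthly_units // cost_per_run)
```

With a hypothetical 100-unit monthly grant, `full_runs_per_grant("ResNet50", "T4", 100)` would report 8 full runs, which is the kind of concrete "what does my tier buy me" number the parent comment is asking for.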

minimaxir 3 years ago

From the Google Colab product lead:

> This has been planned for months, it's laying the groundwork to give you more transparency in your compute consumption, which is hidden from users today.

https://twitter.com/thechrisperry/status/1564806305893584896

sabalaba 3 years ago

For those affected and wanting to run your stable diffusion notebook more, you can always spin up a notebook on Lambda cloud with A100s for only $1.10/hr. PyTorch, TensorFlow, and Jupyter notebooks are pre-installed:

https://lambdalabs.com/service/gpu-cloud

endisneigh 3 years ago

Pro tip: if it costs someone something, it’s not unlimited (this is true even if you’re paying a flat fee).

  • doesnt_know 3 years ago

    Within capitalism, this is everything that exists. The phrase should effectively be banned in all marketing and service contexts across all industries.

frederickgeek8 3 years ago

I just subscribed to Colab Pro+ hours before this announcement (-‸ლ)

  • cperry 3 years ago

    I'll refund you myself if you hate it. This change will give you more transparency in your paid compute consumption, though won't land for 30 days.

    • frederickgeek8 3 years ago

      Thanks for the response! I’m glad Colab is sorting out the pricing vagueness. I’m hoping that we get more detailed information about the changes before the launch so I (and my team) can better prepare. Do you know approximately when information about the new pricing tiers will be launched?

      • cperry 3 years ago

        Sept 29 we'll have an announcement. Working on a blog post now that hopefully (?) will resolve some open questions.

TigeriusKirk 3 years ago

Sigh. Unlikely these changes will be of benefit to us users.

  • cperry 3 years ago

    This increases visibility into your paid compute consumption. Net win for folks. Feature doesn't launch for 30 days, but I can't make any changes like this without changing ToS, which I have to pre-announce.

    • TigeriusKirk 3 years ago

      I've been using Colab for a few months as I dive into my personal ML education. Free at first, Pro tier now. It's very handy and I appreciate the product.

      I suppose my concern is this - Was Colab using "compute units" internally for limits already? Or was it "as available"? And if so, will my compute units allocation be what it has been or will it be decreasing? To be honest, when I read the email today, I assumed the limits would be tightening. It feels like the sort of thing that happens when a company tightens up on a product. Not really a reflection on you or your product, just past experiences.

      • cperry 3 years ago

        Thanks!

        Colab has always had a notion of quota for all users. It was implemented in a hard-to-describe way: we would throttle you as you used more compute, without making it clear how or why. This change will make the quota allocation transparent and visible for paid users.

        Any limits we're using here are just to ensure you don't use more GPU $ than what you've paid for.

        • Ularsing 3 years ago

          > Any limits we're using here are just to ensure you don't use more GPU $ than what you've paid for.

          So naturally, any unused balance will be refunded, right Chris? You wouldn't want to start overcharging some users while fixing the "bug" or "loophole" that undercharged others.

          • cperry 3 years ago

            It's a fair critique and one we have discussed. With this change we are extending the lifetime of the units to three months to give users more flexibility. This is a net user improvement, today if you pay for a subscription any "unused" quota disappears after a month.

    • PotatoMunchkin 3 years ago

      Just wanted to say thank you for the transparency in posting here; does this mean that for paid users, compute consumption limits will stay the same or go up?

      • cperry 3 years ago

        Thanks!

        The grand majority of paid users will see no impact on their limits. We do have some bugs where a small fraction of paid users today use more than their quota (Pro+ users burning $100 worth of an A100 while paying us $50), and those folks will see limits, yes, as we fix some loopholes.

        I hesitate to post this given the anger I've seen about any changes, but I can't subsidize paid tiers, that gets expensive fast. As best I can tell from looking at prices for competitors in this space, Colab will still be the best value for money. If folks find otherwise, please do tell me where the competitors are sourcing those GPUs for cheaper.

        It goes without saying we remain committed to supporting the free tier.

porker 3 years ago

Good. Hopefully this will reduce the randomness of type-of-GPU assignment on the Pro plan.

I fine-tuned a model on Colab Pro earlier this year and having to launch and quit 6 or 7 times to get a faster graphics card to ensure it completed within the time limit sucked.

Hope this will give more transparency into whether you are assigned a whole card or a virtual slice of one. Something I could never work out before!

  • cperry 3 years ago

    You're always getting the whole card, never a slice fwiw. We haven't found a GPU virtualization solution that has a strong enough security boundary between slices, so we keep you isolated.

    And yes, hoping to give you more control over chip type too, stay tuned.

    • porker 3 years ago

      Thanks that's good to know.

      I would have upgraded to Pro+ if I had confidence it would speed up the process, but the promises of what you get (beyond "runs in background") were/are so vague I couldn't tell if it was worth it. I was only using 50-80 hours a month and it sounded like a plan aimed at more usage.

      Google was/is leaving money on the table with the old scheme.

rahidz 3 years ago

Right when I started using it for StableDiffusion. Lovely.

  • theamk 3 years ago

    That (people using a lot of resources) is probably why the change was made.

    • cperry 3 years ago

      My peeps, I cannot get legal counsel for seventy billion countries to turn around a ToS update this fast. This has been planned for months, we're looking to increase transparency in your paid compute consumption.

  • desindol 3 years ago

    The restriction was always there; the only thing that's changing is that you can see it now.

  • LuciferSam86 3 years ago

    It would be nice to have a plan tailored for AI image generation, leaving the more powerful GPUs for heavier jobs.

    Like, I don't know, $15 per month. Still cheaper than buying a GPU VPS.

boredemployee 3 years ago

For those looking for good alternatives, I recommend vast.ai
