Google Cloud: $72,000 bill overnight
(theregister.com)
Dupe (just a restatement of the original blog posts): https://news.ycombinator.com/item?id=25372336
> The ex-Googler reflected that he missed the possibility of pages that link back to each other, causing "infinite recursion."
Although tangential to the billing issue, this is reckless. If you’re building a crawler of any kind, please, please, please prioritize ensuring this doesn’t happen so I don’t have to wake up at 3 AM.
I run the infrastructure for a moderate-sized site with roughly a hundred million pages. We can handle the HN hug-of-death just fine. But poorly made crawlers that recurse like this? They’re increasingly problematic.
If your solution to fixing your crawler is “throw more concurrency at it and ignore the recursion,” and suddenly your requests start timing out, that’s a pretty damn strong hint that you’re ruining someone’s day.
From my perspective, this will look like an attack. I’ll see thousands of IP addresses repeatedly requesting the same pages, usually with generic user agent headers. Which ones are actual attacks, and which are just poorly made crawlers? Well, if you’ve got a generic user agent string that doesn’t link to a contact page, and you’re circumventing rate limiting by changing your IP address, and you had the bright idea to let your test code run overnight, I’m going to treat it as an attack. At 3 AM, I’m not inclined to differentiate between negligence and malice.
This is happening more and more often, and I partially blame it on the ease of “accidentally” obtaining a ridiculous quantity of cloud resources. People deploy shoddy test code and go to bed. They turn it off in the morning when they see the bill.
It’s become so prevalent that our company has come up with an internal term for these crawlers that spin up a new thread/container for every page: snowballing crawlers.
Save a sysadmin: don’t snowball.
Oh, and include a useful user agent header so we can contact you instead of your cloud provider.
Also - as someone with a ton of experience on the other side of this coin:
Puppeteer etc. are nice and all, but if you can get away with raw HTTP requests, grabbing and parsing the HTML without pulling down stylesheets, JS, etc., do it. It is WAY more efficient than requesting the full user-experience overhead from these folks, and threading out 5-10 workers to gracefully crawl a site this way doesn't typically cause things to melt down on your target's end.
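To make that concrete, here's a minimal Python sketch, assuming the requests and beautifulsoup4 packages; the URL and user agent are placeholders:

```python
import requests
from bs4 import BeautifulSoup  # assumes beautifulsoup4 is installed

# One lightweight GET pulls down only the HTML document itself; no
# stylesheets, scripts, fonts, or images, unlike a headless browser.
resp = requests.get(
    "https://example.com/some-page",  # placeholder URL
    headers={"User-Agent": "ExampleCrawler/1.0 (+https://example.com/bot)"},
    timeout=30,
)
soup = BeautifulSoup(resp.text, "html.parser")
title = soup.title.string if soup.title else None
links = [a["href"] for a in soup.find_all("a", href=True)]
```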
You may be saying "well I need a browser-stack or evaluated JS to do my work" and you may be right... but honestly though 90% of this stuff is reverse-engineer-able with Charles Proxy and some basic webdev experience. Heck - I've even sandboxed JS from a target's site to generate tokens/etc to cut down on repeat requests. Even CAPTCHA stuff can easily be done without having to pull down full UIX overhead these days.
---
"Save a sysadmin: don’t snowball."
Implement thread limits, rate limiting, throttling, and intelligent caching, and try to fit within your target's hosting capabilities without being disrespectful. Often I will "smear" large jobs over weeks' worth of time so that it's only a trickle of traffic here and there (and also to fly under the radar... sorry).
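In Python terms, the shape of it is something like this (worker count and delays are illustrative; fetch_and_parse and url_queue are stand-ins for your own code):

```python
import time
import random
from concurrent.futures import ThreadPoolExecutor

MAX_WORKERS = 5   # hard cap on concurrency: no snowballing
MIN_DELAY = 2.0   # per-worker pause between requests, in seconds

def crawl_one(url):
    # fetch_and_parse is a hypothetical stand-in for your fetch/parse routine
    fetch_and_parse(url)
    # jittered delay "smears" the job out instead of hammering the target
    time.sleep(MIN_DELAY + random.uniform(0, 3))

with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
    # url_queue: your deduplicated crawl frontier (an iterable of URLs)
    pool.map(crawl_one, url_queue)
```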
Also - on the custom UAS: Unless you're trying to make it easy to get blocked/identified then don't take this advice. Let's face it - this is a gray area for most. The best way is to not "snowball" and to make your scrapers indistinguishable from a reasonable stream of real users from real networks. I would never expect a sysadmin to contact me because frankly they aren't paid to.
---
One last thought: the people who are out there writing these bots/crawlers/etc. are often the lowest common denominator. They're the type that will get something "working" and hurry on to the next job, because the nature of the work tends to be a ton of low-paid contract stuff. Also, at almost every place I've worked in ecommerce that has scraping involved, it's the bottom-rung dev talent that's assigned to the work.
Sucks, but near-100% I attribute your "snowball" situation to that.
> Also - on the custom UAS: Unless you're trying to make it easy to get blocked/identified then don't take this advice.
I can’t speak for other sites, but we’re pretty good at picking up on crawlers that don’t have a unique UA. The problem is that we’re going to have a hard time differentiating your well-behaved crawler from more malicious crawlers, and you’re going to get caught in the crossfire.
> if you can get away with raw HTTP requests grabbing and parsing the HTML without pulling down stylesheets, JS, etc. do it.
If you combine that with the lack of an identifying UA, there’s unfortunately a good chance you’ll get caught in the crossfire during an actual attack. That being said, it’s good advice otherwise. Though if you’re trying not to be identified as a crawler, fetching only the HTML is really going to stand out.
> I would never expect a sysadmin to contact me because frankly they aren't paid to.
I am. Furthermore, as long as you’re being transparent about your activity (see: UA), I don’t mind working with you instead of your provider. I understand that writing good crawlers is a learning experience; mistakes do happen. When I send abuse reports, usually people just get a slap on the wrist, but not everyone is that lucky.
But, if your UA has contact info, I can:
1. Easily rate limit or block you until the issue is resolved
2. Contact you directly, explaining exactly what’s wrong
3. Easily unblock you once it’s fixed
Sure, I’m not going to be happy about it, but I’m going to be a lot happier than if you try to blend in—a situation in which I’m not going to have any sympathy.
Unfortunately, most sites don’t respond that way and would rather just block anything remotely suspicious. But since you can always change your IP address, maybe try with an identifiable UA first—please? :)
Edit: Also, a few recommendations to add:
1. Be prepared to handle obscure HTTP status codes. 503 indicates you need to back off. Frequent 500, 502, or 504 means the same thing. 429 and 420 mean you’re being rate limited; slow down. 410 means you should stop requesting the given URL. 400 or 405 means you probably have a bug. Any unrecognized 4XX or 5XX error should be flagged and examined so you can handle it better in the future (see the sketch after this list).
2. You can send an X-Abuse-Info header and a generic UA if you want capable sysadmins to be able to identify you but want to avoid being blocked by inexperienced webmasters.
3. Don’t ignore abuse reports.
4. Try to be consistent and ramp up slowly. It’s harder to cope with unnaturally-abrupt increases in traffic.
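Here’s a rough Python sketch of (1) and (2) combined; the user agent, contact address, and thresholds are placeholders, not a definitive implementation:

```python
import time
import requests  # assumes the requests package is installed

BACKOFF_CODES = {420, 429, 500, 502, 503, 504}
HEADERS = {
    "User-Agent": "Mozilla/5.0 (compatible; ExampleCrawler/1.0)",  # placeholder
    "X-Abuse-Info": "crawler run by ops@example.com",              # placeholder contact
}

def polite_get(url, max_retries=5):
    """Fetch a URL, backing off on overload codes and dropping dead URLs."""
    delay = 5.0
    for _ in range(max_retries):
        resp = requests.get(url, headers=HEADERS, timeout=30)
        if resp.status_code == 410:
            return None  # Gone: never request this URL again
        if resp.status_code in (400, 405):
            raise RuntimeError(f"probable crawler bug fetching {url}")
        if resp.status_code in BACKOFF_CODES:
            retry_after = resp.headers.get("Retry-After", "")
            delay = float(retry_after) if retry_after.isdigit() else delay * 2
            time.sleep(delay)  # slow down before retrying
            continue
        if resp.status_code >= 400:
            print(f"unhandled status {resp.status_code} for {url}")  # flag for review
            return None
        return resp
    return None
```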
(2) is a great idea I hadn't considered. A surprising number of sites require "browser" user agents but otherwise have well-defined rate limits, robots.txt files, and everything you'd need to write a respectful crawler.
I'm not sure that (4) matters for larger sites? Their rate limits are usually a drop in the bucket compared to the background traffic.
#4 was more to avoid being noticed by someone like me before they’ve had their morning coffee. That being said, if anything does go wrong, and you’ve ramped up slowly, at least it gives autoscaling time to respond.
Generally, though, unless you screw up badly, submit forms, or blend in with a more problematic crawler, nobody’s going to care (or even notice).
The web of pages (URLs) is not a DAG, and hence it can have loops. Regardless, even though I've never designed a web crawler, I'd think a basic feature would be deduplication: a database (table) of visited URLs with a timestamp, so you can visit again after X days to check for changes (this refresh rate could also be stored per URL in the table), and the crawler would check this table before visiting a URL.
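Something like this minimal SQLite sketch, say (table layout and refresh interval are just illustrative):

```python
import sqlite3
import time

REFRESH_SECONDS = 7 * 24 * 3600  # revisit after X days; could be stored per URL

db = sqlite3.connect("crawl_state.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS visited ("
    " url TEXT PRIMARY KEY,"
    " last_seen REAL NOT NULL,"
    " refresh_seconds REAL NOT NULL)"
)

def should_visit(url):
    """Consult the visited table before fetching; this alone breaks the
    'pages that link back to each other' infinite recursion."""
    row = db.execute(
        "SELECT last_seen, refresh_seconds FROM visited WHERE url = ?", (url,)
    ).fetchone()
    return row is None or time.time() - row[0] > row[1]

def mark_visited(url):
    db.execute(
        "INSERT OR REPLACE INTO visited VALUES (?, ?, ?)",
        (url, time.time(), REFRESH_SECONDS),
    )
    db.commit()
```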
Trust me, that's not the first thing you think about when designing your scraper.
> a basic feature would be deduplication
Typically, one doesn't care whether the same page has been visited before. What one does care about is avoiding storing duplicate data.
Which ones are actual attacks, and which are just poorly made crawlers?
If it walks like an attack and it quacks like an attack...
I am lacking in sympathy for the perp here, as being careless like this has probably caused problems and possibly cost significant money for a lot of people.
However, this is also a compelling demonstration of why cloud services should be required to provide a hard price cap option, for safety reasons. I've heard all the self-serving arguments they make about how turning things off unexpectedly might be unwanted behaviour and so on. If that's the case, the admin won't set a cap. But there are exactly zero circumstances under which someone who intended to cap their usage at single digits of dollars, or to remain within a free plan, intends or wants to run something that costs four orders of magnitude more than that. IMNSHO such predatory pricing models should be illegal (assuming the charges aren't already considered unenforceable by courts under such circumstances; I haven't checked).
It's highly likely that this person caused something like a $72,000 bill across all the websites he crawled. It's just that the cost is spread out over multiple websites, so it is not noticeable.
I don't think so; serving a page from cache is far cheaper than requesting, crawling, and storing that page in a database. Cloud comes with a premium, too.
While that’s true, not everything can be cached, and many websites run expensive code to assign a session to each new “user.” Larger sites generally learn to avoid that or have the infrastructure to accommodate it, but even moderate-sized blogs and forums probably can’t cope with that scenario all too well.
Maybe it would work to put a marker argument (like the IP address as base64) in the URL when there might be snowballing traffic, so you can see if it comes back at you. That could be used to send a page with all the links taken out, or just to rate limit.
Tricks like that don’t work with sites that are receiving a lot of traffic. Also, the exact solution you’ve described is a liability—IP addresses leak when people send each other links, and having unique URLs like that can cause issues with caching. Sure, we could store tokens in a database, but then you’ve just moved the bottleneck to the database.
We do have various ways to combat these issues; like any website of sufficient size, we have pretty complex methods of detecting problematic traffic and assessing the risk of any given request or session. However, no solution is perfect, and with the number of broken crawlers we see, some will inevitably cause problems.
To be clear, we can adjust our code and block them—that’s not an issue. The issue is that I have to wake up at 3 AM to do it, and even if it’s blocked, dealing with that traffic can be expensive. This guy got his $72k bill forgiven, but don’t expect the websites on the other end to be so lucky. (Yes, yes, ingress bandwidth is often free, but it’s never that simple. Scaling up? Bezos takes a cut. More database traffic? Pay the Bezos tax. Replication of enormous logs to other providers? Bezos hungry!)
Negligence is negligence. If you get in a car and drive recklessly without proper training, even if you didn’t intend to hurt anyone, you’re not going to get a lot of sympathy when you mow down a pedestrian. Likewise, I have little sympathy for people who face enormous bills for abusing powerful tools.
That’s not to say cloud providers don’t have billing problems. The delays are unacceptable, and the budgeting tools are often unintuitive or, as was likely the case here, outright inadequate. But in no universe was deploying code that spun up a container for every URL encountered a good idea.
Should such a mistake result in a $72k bill? Eh, probably not. I doubt this person will make the same mistake again, even with the bill forgiven. Or maybe they’ll just blame Google and attempt the same thing on AWS.
I would think you could obscure whatever marker you use fairly easily; any basic encryption should work. It mostly seems like you could do something that temporarily throttles crawlers to a limit that doesn't affect humans much, so you don't have to do something manual in the middle of the night: statistical outliers get limited to one page request per second per IP, or something like that.
The rest of this is arguing against something I'm not saying, which is fine, but thinking about a solution is not condoning the problem.
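For what it's worth, a sketch of the kind of obscured marker I have in mind, using an HMAC tag rather than encryption proper (the key and the way it's attached to URLs are placeholders):

```python
import base64
import hashlib
import hmac

SECRET = b"rotate-this-key"  # placeholder server-side secret

def make_marker(client_ip):
    """Opaque, tamper-evident token tying issued links to the requester's IP."""
    mac = hmac.new(SECRET, client_ip.encode(), hashlib.sha256).digest()
    return base64.urlsafe_b64encode(mac[:12]).decode().rstrip("=")

def marker_matches(token, client_ip):
    """If the marker comes back from a different IP, the link was shared
    (or a snowballing crawler is rotating addresses)."""
    return hmac.compare_digest(make_marker(client_ip), token)

# e.g. append "?m=" + make_marker(ip) to links served to suspicious traffic
```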
> I would think you could obscure whatever marker you use fairly easily, any basic encryption should work.
Indeed, you can, and there are situations in which it makes sense. However, it doesn’t really help when it comes to detecting abuse of this sort. For one, CGNAT causes problems. There’s also the issue of people linking to articles from sites like HN and Wayback Machine. Those two alone make it nearly impossible to automatically rate limit based on an ID in the URL.
CGNAT is a big issue that Western companies tend to neglect. However, it’s increasingly common in places like India, and it’s even seen at times in the US, especially in rural areas.
And, of course, public VPNs are growing in popularity.
Unfortunately, all of these factors mean that performing any sort of risk analysis or rate limiting on IP address alone tends to be ineffective or outright harmful for moderately large sites. You can do some fairly basic categorization (this is from a residential ISP, this is from a datacenter), but beyond that, it’s not particularly useful.
Hypothetically, let’s say:
1. We tag every URL with an IP address association in some way.
2. Someone posts a link on HN.
3. We see lots of requests with IP address tags that don’t match the actual requesting IP address, so we block or rate limit them.
4. We’ve just blocked traffic from HN.
Another hypothetical:
1. We design, calibrate, and test a rate limiting system in the US.
2. Some large percentage of real-world traffic comes from India and is behind CGNAT.
3. We’ve just rate-limited most of India.
4. So we exclude India.
5. But now we’ve rate-limited Nigeria, and malicious traffic from India isn’t blocked.
What we actually end up doing is similar but mostly relies on cookies instead, and it’s only a single risk factor. It’s not perfect, and it has some caveats that the URL solution avoids, but it has far fewer false positives.
I have solved this kind of attack with Redis plus an app modification to count requests per IP per minute and automatically add iptables rules to ban the offender's IP, then unban it after XX minutes. The iptables rules are then synchronized to my fleet of front-end servers.
I noticed Cloudflare is doing the same but one level deeper, with XDP drop: https://blog.cloudflare.com/how-to-drop-10-million-packets/
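Roughly like this, as a Python sketch with redis-py (thresholds are illustrative, and the unban timer and fleet synchronization are left out):

```python
import subprocess
import time

import redis  # assumes the redis-py package is installed

r = redis.Redis()

BAN_THRESHOLD = 300  # requests per IP per minute (illustrative value)
BAN_SECONDS = 600    # unban after 10 minutes

def record_request(ip):
    """Count requests per IP per minute; ban via iptables when over threshold.
    Returns True if the request should be served."""
    key = f"reqcount:{ip}:{int(time.time() // 60)}"
    count = r.incr(key)
    r.expire(key, 120)  # keep counters a little past their minute
    if count > BAN_THRESHOLD:
        # set-if-not-exists ensures we only add the iptables rule once per ban
        if r.set(f"banned:{ip}", 1, ex=BAN_SECONDS, nx=True):
            subprocess.run(
                ["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"],
                check=False,
            )
        return False
    return True
```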
This person used a new IP address for every single request, so that won’t work. And that’s a growing trend.
Yep I can come at someone with datacenter/residential/mobile/etc. IP addresses, all incredibly configurable to slip around network blocks. Luminati and Proxy Bonanza are the services I've used with the most success.
Getting source proxy lists of high-reputation networks is just a matter of money and a simple API integration these days.
There’s also the issue of CGNAT. If you rate limit too strictly based on IP address, you harm users who are stuck with CGNAT, especially in Asia and Africa. India is particularly problematic.
As for stuff like Luminati, if you’re being sufficiently sneaky, chances are you’re not going to snowball in the first place. I’m not sure why anyone would bother paying for Luminati to crawl sites like the one for which I work, but I have seen people use it to scam.
We can’t really be bothered to waste resources blocking well-behaved crawlers. Just keep it at a reasonable pace, respect errors (especially 429, but also 410 and 503), and ensure we have a way to contact you if things go wrong.
Yep - I have the HTTP error code detection dialed in to an extreme, because it's dumb to run a broken scrape anyway.
Frankly, just any errors: if I see more than, say, 5-10 jobs fail within a 2-3 minute period, things are designed to wait X time and try again... and if they're still encountering errors, stop and ping me to come in and investigate.
Faulty retry logic is just as dangerous as the forked/distributed run-off situation.
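In sketch form (Python; the window and limits are the rough numbers above, and the alerting is reduced to a print):

```python
import time
from collections import deque

FAIL_WINDOW = 180  # the "2-3 minute" window, in seconds
FAIL_LIMIT = 8     # "more than say 5-10 jobs"
COOL_OFF = 900     # wait X time before trying again

failures = deque()
already_cooled_off = False

def on_job_failure():
    global already_cooled_off
    now = time.time()
    failures.append(now)
    # drop failures that fell out of the sliding window
    while failures and now - failures[0] > FAIL_WINDOW:
        failures.popleft()
    if len(failures) < FAIL_LIMIT:
        return
    if already_cooled_off:
        # still failing after one cool-off: stop and page a human
        print("crawl still failing after cool-off; stopping for investigation")
        raise SystemExit(1)
    already_cooled_off = True
    failures.clear()
    time.sleep(COOL_OFF)  # wait, then the crawl resumes and tries again
```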
I'd assume they use 1000 distinct IPs because the system scaled to 1000 instances and presumably many of them came from related IPs. So it makes IP banning considerably harder, but not impossible.
More importantly, any reasonable IP-based approach would have a lot of false positives. What if there’s an RSS service like Feedly running on the same cloud?
Depending on your product, blacklisting all AWS IPs might be acceptable. For example, my company has a VPN exit on AWS, which appears to be blacklisted by Twitter.
This happened to me with LinkedIn advertising, where my budget of a couple hundred dollars got re-charged to my card up to a couple thousand dollars, without notifications. They handled all the complaints not over email but through their web-based interface, and memory-holed their commitment to refunding me the money after multiple back-and-forths.
This isn't a mistake, the design is their business model. While we don't have a specific formal definition and name for it in the category of dark patterns, I'd like to name it "scumbag billing," where we got scumbagged.
Or, if you cancel your Amazon account it doesn’t immediately stop all AWS billing.
It is — kid you not — recommended to terminate your individual services to avoid additional billing.
A technical limitation of a trillion-dollar company? I say scumbag billing.
What? How is that even legal?
Log in, click cancel account, and read the language. I’m sure it’s because of some of their intertwined or niche services and not everything, but it is very real.
You can cancel your credit card, but I imagine Amazon and co. shop/sell their collections to whichever debt collectors offer the best deal on paper (aka truly the scummiest and worst ones).
If you have ever been pursued for unpaid debt like this, despite consumer legal protections, it is years of Hell: legal threats/letters, calls, and other gray intimidation, all while wondering if your credit score will just be nuked overnight.
Source: Victim of identity fraud
Can you afford better lawyers than Amazon? If not there's your answer...
Speculating, but I wager they sell smaller collections in bulk to debt collectors at a discount, rather than trying to collect in-house.
Eventual consistency.
I remember broadband ISPs used to do that at the end of the '90s and early 2000s, with high prices per MB of traffic; they had to write off overages left and right (viruses on Windows were the number 1 issue). Then competition kicked in and "scumbag billing" disappeared: first they got rid of overage charges and started merely throttling speed after caps, and eventually got rid of caps altogether. Similar things were happening in the hosting industry, colocation and dedicated servers in particular, where competition eliminated overage charges too.

Somehow "clouds" think they are too different, as if they don't compete with traditional hosting offerings, but being scumbags trying to keep their high margins certainly pushes people away. I used to be an early AWS and Rackspace customer (remember when they were the two top choices?), but haven't really used "clouds" since then; they are simply not competitive for literally anything, at least for someone like me who's been doing infrastructure for decades.
Always use a virtual card for this. This way you can tell in advance how much can be drawn max and the expiration date. And even if their database is compromised it won't work elsewhere.
The #1 reason why I have never used GCP/AWS etc. on my personal bank card, and never plan to (while things remain this way).
Instead I use DigitalOcean, where you have droplet limits that you can set and the ability to pre-pay if you pay by PayPal, so I never enter my bank card.
If anyone from DO (or another provider) is reading this, any chance of pre-payment from bank cards? After reading enough of these articles, this could really swing a cloud provider choice for a small company. (Pre-paid gift vouchers would be cool as well, give someone $10 to spend for Christmas).
I use DigitalOcean and not GCP/AWS, personally and professionally, for the exact same reason: they provide a way to not give the company free-for-all access to a bank account.
It's too bad that DO only allows pre-payment via PayPal. (I recently lost access to my PayPal account due to them having a wrong phone number - and I can't login to correct it, nor have any contact for support.) I'd love to remove PayPal as a dependency, and pay DO directly from a bank - but not give them permanent permission to withdraw from it.
It's a clear dark pattern that GCP/AWS does not provide a simple way to prevent such unexpectedly huge bills.
that’s a non-argument. you can do the same with an EC2 instance. you know exactly what it’s gonna cost. it’s these fancy services with “elastic” pricing models that usually get you
Sure, you know the price when renting one machine, and if what you're doing is not a web app. But what about when you get way more network traffic than your app expected (I've seen HN submissions about exactly this)? And what if you built in some kind of scaling, automatically renting extra machines when you get traffic spikes? They have your card; you pay the $$$.
AWS services such as ECS Fargate let you specify an upper bound to the number of instances that can possibly be spawned.
There is even a default 'task limit' they enforce which we had to increase by sending an email request.
Typically though, as in the article above, it's the database service scaling that causes big shocks.
Rule of thumb would be to always ask if there is an upper bound to every cloud service one uses.
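For instance, with boto3's Application Auto Scaling API you can pin a hard ceiling on an ECS service (the cluster and service names here are placeholders):

```python
import boto3  # AWS SDK for Python

aas = boto3.client("application-autoscaling")

# Cap how far the ECS service can scale out, so a bug can't spawn a
# surprise fleet of tasks; names and limits are illustrative.
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/my-service",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=10,  # hard upper bound
)
```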
you can have auto scaling limits on ASGs on EC2. again, the technical solutions are there. it's a matter of learning how to use them properly
I don’t have any skin in the game here, but what if you don’t build auto-scaling, which keeps this comparison fair? How does pricing differ now?
The difference would be that for a prepaid service, you can build it in knowing that once it eats through enough $, that's it: there is no more credit to take, so the service powers off. Whereas with billing by credit card, it will keep on going, and you can end up with these huge bills to pay.
But to answer your question (about AWS EC2 vs a DO droplet?) about other costs: you still have data transfer costs, which are currently:
AWS (for US East: Ohio):
Inbound: first GB free, then $0.09/GB (until 10 TB, when you go to the next tier and pay a little less per GB).
Outbound: well, I couldn't figure it out. The page was too complicated for me! (I think it's $0.01/GB? From this text: "Data transferred “in” to and “out” from public or Elastic IPv4 address is charged at $0.01/GB in each direction")
Source: https://aws.amazon.com/ec2/pricing/on-demand/#Data_Transfer
DO:
Inbound: free.
Outbound: the free tier depends on which droplet you have and how long it's powered on, but for the cheapest $5/month droplet powered on all month, you get 1 TB free. After the free tier: $0.01/GB.
Source for inbound: https://www.digitalocean.com/docs/billing/bandwidth/ Source for the outbound calculator: https://www.digitalocean.com/pricing/bandwidth/
Anyone who understands it better than me (especially the AWS pricing), please feel free to comment, I'd genuinely be interested to understand it better; with the way it's documented, I don't really understand it very well.
Thanks for taking the time to dig in and reply! Yeah I was figuring it was networking where they would really get you.
The way AWS complicates their pricing to the point where it's hard to tell what you're on the hook for just comes across as so... shady to me. I understand what they offer, and which problems they solve, I just don't personally like doing business with entities like AWS.
No, I'm not building anything that really needs the scale of AWS, and yeah I guess that invalidates my opinion of it to a certain extent. I'm just a stranger throwing their voice into the void for fun and to learn new things :P
Inbound is free on AWS, outbound starts at $90/TB and goes down to $50/TB eventually.
Compare this to Hetzner where it costs EUR 1/TB and many server types include unlimited traffic at 1 Gbit/s.
I think the $0.01/GB you cited is region internal traffic which was sent through public IPs instead of private IPs.
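Back-of-the-envelope with the figures above (first-tier rates only, purely illustrative):

```python
# Rough monthly cost for 10 TB of outbound traffic, using the rates quoted above
tb = 10
aws = tb * 1000 * 0.09  # $0.09/GB first tier -> ~$900
hetzner = tb * 1.0      # EUR 1/TB            -> ~EUR 10
print(f"AWS: ~${aws:,.0f}/mo vs Hetzner: ~EUR {hetzner:,.0f}/mo")
```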
yeah no. inbound is free on AWS.
There are other costs which can scale pretty high. The overpriced outgoing traffic is one. Certain APIs also charge per request or per traffic volume; S3 object storage is one example, databases like Firebase another. I believe most of the costs the OP incurred came from Firebase and not the 1000 Cloud Run instances.
again. the comparison needs to be apples to apples.
it's true that outbound traffic does cost (and in theory adds up if you do a lot of traffic), but the budget instances don't have that much bandwidth to start with. Also, if you have a huge amount of traffic, depending on what you do you might be better off with S3 + CloudFront.
Isn't outbound cloudfront traffic just as outrageously expensive as outbound EC2 traffic? My impression is that AWS is simply unaffordable when you have large amounts of traffic.
I tried to sign up with a gift card and it was rejected.
I can attach a gift card to paypal and use it.
Personally, I would rather give my credit card to DO than have to use PayPal. I avoid them. It's always sad when a store only accepts PayPal; that's an instant missed sale.
Funny enough I do the same thing with PayPal as I do with DO, deposit money in there, don't tie it to my bank account. Any problems, at least my losses are limited.
My problem with AWS billing was that I asked sales/support to call me so I could check what the cost would be to test one of their higher-end GPU servers. They said I would just pay for the minutes used, and that it wouldn't add up to anything.
I tested the server for about 5 minutes and was charged a couple hundred dollars for "spinning up" the instance. Something the AWS sales guy assured me on the phone would not happen.
I still don't know why I didn't appeal. I guess I know better than to try.
Next time don't ring; write an email. I've worked in financial sales. What's in the contract counts more than anything anyone ever says.
But does an email that says you only pay for the minutes count as a contract?
It just creates a record. It doesn't have to be enforceable, but it means the person doing support becomes accountable for the things they've said. Chances are there are telephony recordings of your call, so you could escalate and ask to hear those recordings. Email's just easier in this regard.
It counts as a legal paper trail, at least.
I have seen similar stories with AWS. It’s somewhat shocking to me that there’s no way to ask to get cut off above some dollar limit. Is every customer risking unbounded liability?
Billing is traditionally not built into critical paths, but built on async or even batch processing of logs from those systems. I doubt any system at Amazon really knows what you’ve spent until after the fact.
I think that’s true, but that’s a reflection of what people considered important when they built it. It’s not the only way to build things.
The liability is on Google's side, mostly. There are hard limits in terms of the number of instances you can create without deliberately asking to spend more money, and these hard limits are set based on what Google is willing to write off for an overnight mistake.
Dollar limits don't make sense for companies because you can't predict which parts of your infrastructure would get shut down first. Hobbyists don't mind if everything is turned off, but they would still want to keep the stored data. There would still be an opportunity to be overcharged on storage even with a dollar limit.
However, setting usage limits would be a solution for both companies and hobbyists. AWS could then calculate the maximum spending per month that is possible with the current settings. I bet they would never build such a calculator and the necessary usage limits because it makes it easier for customers to optimize costs.
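In sketch form, such a calculator would be trivial (every limit and unit price below is made up):

```python
# Hypothetical "max monthly spend" calculator derived from hard usage limits.
# All limits and unit prices are placeholders, not real AWS rates.
LIMITS = {
    "instances": (10, 0.10 * 730),  # max count, $/month each (at $0.10/hr)
    "egress_gb": (1000, 0.09),      # max GB out, $/GB
    "storage_gb": (500, 0.023),     # max GB stored, $/GB-month
}
max_spend = sum(count * price for count, price in LIMITS.values())
print(f"Worst-case monthly bill: ${max_spend:,.2f}")
```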
Well, I suggested an option. That doesn’t mean companies have to choose it.
But to be honest, it’s somewhat surprising that companies are willing to take on the risk of unbounded financial liability if someone makes a mistake.
There are too many services that have continuous billing. The only way to hard-stop is to start deleting compute and data.
Most AWS users would rather lose money than data and service for their customers, and bills are easier to negotiate than trying to recover your infrastructure.
The in-between approach is to create rate limits (either per sliding scale or total), which exists for some products but is probably too complicated to implement for everything.
Taking AWS for example, the interface for the site is quite good overall, with the suspicious exception of the billing pages, which are completely mysterious and unusable. What a weird coincidence; not a dark pattern at all.
If I were in AWS/GCP's position, I would prefer to send alerts rather than turn off services.
Shutting off services can mean destroying the customer's data with no way for them to recover it. That could be from terminated ephemeral disks, or a terminated database, or cutting off a critical upload stream into their instances.
It's a lot easier to reduce/forgive a bill when a customer makes a mistake than to recover their lost data (or the loss to their business).
Can’t you just stop spinning up new services and suspend running ones, including connections and DB access? Start with bandwidth?
Stopping DB access is a great way to mess up a lot of stuff. Even just stopping connections to the live website would be full of potential issues. Should it stop all access, or still allow access to the administrative interface? What if the root cause of the billing overage is a report run by finance, or someone running a large job on EMR; should that pull the plug on the website?
What if the new services that are being suspended are writes to queuing systems that are used for order fulfillment or other business processes, should we drop these orders on the floor?
It's much easier to handle it post facto, and write off the expense on the cloud provider side, which doesn't cost them that much anyway. There are some guard rails that prevent people from doing catastrophic things that they can't write off (eg. taking all of the compute capacity of a region for hours on end, preventing other customers from actually using it) using limits that require manual intervention to be raised.
you can get billing alerts today. it’s not a lack of control mechanisms that caused this failure. it was the noobism of the user
The over spend was caused by an application bug, not because the user was a "noob".
What about prepaid credit cards with a payment limit? If the payment fails, will the service be terminated? Or does AWS continue and send an invoice anyway?
They still send the invoice. I accidentally left a couple of small files in an S3 bucket for years. Eventually my card expired on that account and they continued bothering me until I contacted support and we agreed to waive the bill.
I imagine they would have been more forceful if it was a larger bill.
Google once (falsely?) considered my Revolut debit card a pre-paid card and refused to accept it for GCS billing. The error message wasn't anything generic either - it stated specifically that pre-paid cards are not accepted.
Aren't Revolut's cards prepaid? I was under the impression it's not a full, real bank account, since they make it so easy to transfer money in and out. I imagine they're making money off the issuing-bank part of the interchange fees.
> What about prepaid credit cards with payment limit? If the payment failed the service will be terminated?
Never used one for this purpose, but since billing happens after the fact (and monthly), AWS won't be aware of the limit until the monthly billing occurs. They’ll just tell you the card failed and that you have a billing liability to take care of (and give you some time to fix it), while still letting you rack up additional debt with services running after the billing fails. They definitely won't cut you off when you've reached the level that would hit the limit on your card (and couldn't even in theory, without realtime notification of other charges against the card, even if they were inclined to).
Someone should make a list of all the providers that support some kind of billing budget. It took me way more time than expected to find a CDN that has a budget limit for my personal project (I'm using Bunny CDN)
I've been looking at BunnyCDN and it looks pretty impressive. Is it as good as their landing pages look? Is it like a CDN+DNS so I can point my nameservers there and then configure the CNAMEs inside? (I'm fairly unfamiliar with DNS in general).
I've been pretty happy with Cloudflare, but at some point I added my credit card (silly me) and now I live scared of a DDOS costing me a lot of money.
I thought Cloudflare does not charge per GB of bandwidth but a fixed fee per month?
I've been super happy with BunnyCDN for my personal SaaS. Their UI is super nice, and it has been super easy to setup. Costs are really easy to predict, which is awesome for me.
DNS is in the works, but at the moment no, you need an external DNS with apex CNAME support (e.g. cloudflare)
Cloudflare has free bandwidth, including DDOS attacks.
From what I can tell, the affected project did have a GCP budget.
If you're talking about something pausing service, maybe call it a billing cap for clarity.
I can also recommend Bunny. Their support is also top notch.
Tarsnap
A month or two ago some product was advertised (ahem, upvoted) on Hacker News that offered a free tier. It sounded interesting and I wanted to try it, but the website didn't allow you to create an account for the free tier specifically. Instead it was like: create a general account, and if your usage remains below X it is free, and as soon as it goes above X you agree to pay. With no way to cap usage, i.e. no way to cap spending.
In other words, the only way to access the "free trial" is to give a blanket promise to pay unlimited amount of money if something goes wrong.
There is no way I would agree to that, so I just closed the browser tab and forgot about the whole thing. That is, until this debate reminded me of it.
For an online service, implementing the cap should be quite simple, so if it is not available, I am going to assume this is intentional.
> Google let go of our bill as a one-time gesture
How many times do they have to do that? Because if it is a high number, they would be operating at a loss.
In previous companies I've seen AWS and Azure do the same for >$10k bills from small startups. One $10,000 invoice is nowhere near as much as a lifetime of hosting a unicorn startup. Most cloud providers will even give you $50k to $100k in credits for 1 year if you are a startup from a good incubator/investor. It gives you incentives not to care about how much you're spending. By the 1-year mark you're probably making some money, raising another series, and attempting to dramatically scale up your business, so now you don't have the time to clean up the tech debt you've created to lower costs. They are absolutely making a bunch of money this way.
That $72K is probably nowhere near what it actually cost Google to provide the service; their real cost was far lower.
This was my thought as well.
What other kinds of businesses or services let you run up a bill of tens of thousands of dollars and then say, "Ok, you made a mistake, you can take it back"?
Any educated guesses on what this compute might actually cost Google?
I assume they're able to do this because the fixed costs have mostly been paid for already and the marginal cost of the electricity, system wear, and bandwidth are negligible, but I'm not sure.
It would cost them dollars. An iPhone costs < $100 to make. Obviously there's a development cost to it all. It's also easy to suggest that there's probably a feature missing at the moment :D
Sorry but your estimate is off by 2x-4x (depending on model). You can Google "iPhone bill of materials" for details.
It would be like writing off a stamp.
I once had a €1,200 bill from Google for using their reverse geolocation API for a month. I complained and got a canned response saying something like "Fine, here's your money back, but next time you're paying." It probably helped that I had been on the free tier up until then and complained that I never got a warning that I had surpassed the free tier's quota of API calls.
I'm not using the service anymore.
Cloud providers have somehow turned a commodity business into a high-margin business. The costs are way lower than you think. The other factor is that keeping customers means more profit than throwing them out, even if they have made a mistake.