Ask HN: Why do people still use GCP and AWS?
Cloudflare is so generous and provides almost everything: Workers, D1, CDN, KV, Durable Objects, Workers AI, S3-compatible storage (R2), and more!

Because they don't offer the most basic service: virtual machines. Not everyone is building serverless JS web applications! Most of the time, people just need a Linux server they can SSH into. Cloudflare doesn't offer this, so I continue to use AWS.

It's definitely not just for JS web apps anymore; you can run Rust, Python, and even standard Docker containers now. Plus, things like D1 (SQL) and R2 (storage) give you the entire backend stack ready-made.
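To make the "entire backend stack" claim concrete, here is a minimal sketch of a handler that reads metadata from D1 and an object body from R2. The method shapes (prepare/bind/all, get/text) follow Cloudflare's documented D1 and R2 APIs, but the interfaces below are local stubs so the sketch stands alone, and the binding names (DB, BUCKET), table, and column are hypothetical.

```typescript
// Local stub types mirroring the documented D1/R2 surfaces (assumptions,
// not imports from the real Workers runtime):
interface D1Result { results: Record<string, unknown>[]; }
interface D1PreparedStatement {
  bind(...values: unknown[]): D1PreparedStatement;
  all(): Promise<D1Result>;
}
interface D1Database { prepare(query: string): D1PreparedStatement; }
interface R2Bucket { get(key: string): Promise<{ text(): Promise<string> } | null>; }
interface BackendEnv { DB: D1Database; BUCKET: R2Bucket; }

// Look up a user's first document row in D1, then fetch its body from R2.
// Table/column names (documents, r2_key) are made up for illustration.
export async function firstDocument(
  env: BackendEnv,
  userId: string,
): Promise<string | null> {
  const { results } = await env.DB
    .prepare("SELECT r2_key FROM documents WHERE user_id = ?")
    .bind(userId)
    .all();
  if (results.length === 0) return null;
  const obj = await env.BUCKET.get(results[0].r2_key as string);
  return obj ? obj.text() : null;
}
```

In a real Worker, `env` would be the bindings object passed to your `fetch` handler rather than these stubs.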
But you're completely right that it doesn't replace a raw VM. Cloudflare's goal is to abstract away the infrastructure so you don't have to manage a Linux server just to host an API or SaaS. If you actually need OS-level access, background daemons, or to run legacy code, you absolutely still need EC2 or a traditional VPS.

You cannot run containers directly; you can have edge Workers that use a container and spin it up/down. They also don't have an OCI registry.

I think a critical limitation is that the database offering is designed for a very specific philosophy (which, honestly, the rest of the platform is too) and isn't suitable for general-purpose use. The 10GB limit per DB is unsuitable for even trivial use cases. I was going to use it for a new prototype (because I already use Cloudflare for other stuff) and realized the DB limitations could quickly become a blocker even for a prototype. If I understand correctly, the primary philosophy of the platform is edge computing with dedicated infra (Workers, DB, etc.) per user. While that may be an under-leveraged niche that Cloudflare excels at, it is still a niche.

To clarify, there are two approaches you can take to handle large-scale databases on Cloudflare. With Durable Objects [0], you can create and orchestrate millions of SQLite databases that live directly on Cloudflare's edge machines. The 10GB limit applies to one database, but the idea is that you design your system to split data into many small databases, e.g. one per user, or even one per document. Since the database is literally a local file on the machine hosting the Durable Object that talks to it, access is ridiculously fast. Scalability of any one database is limited, but you can create an unlimited number of them. If you really need a single big database, you can use Hyperdrive [1], which provides connection management and caching over plain old Postgres, MySQL, etc.
Cloudflare itself doesn't host the database in this case, but there are many database providers you can use it with.

[0] https://developers.cloudflare.com/durable-objects/
[1] https://developers.cloudflare.com/hyperdrive/

(I'm the lead engineer on Cloudflare Workers.)

I haven't used Cloudflare specifically for infrastructure, but PaaS offerings like this have huge vendor lock-in, which is a big risk. Not to mention that if the feature you need isn't supported by their abstraction, you're out of luck.

And don't forget Tunnel!
It's a magical invention that lets us host websites while keeping our server firewalls closed!

100%. Tunnels are basically a cheat code. Being able to expose services securely to the internet without opening a single inbound port on the firewall feels like magic.

Does Cloudflare have an equivalent of GCP's Kubernetes Engine?

I need VMs. I need NetApp Volumes that interact with my environment. I use Kubernetes. We have some serverless web app stuff, but I can make that work in Azure, AWS, or GCP alongside the non-serverless parts. I have a shit ton of legacy stuff in AWS, and finding skilled AWS talent is easy. I want diversity of providers.

I can't figure out why we are obsessed with AWS and GCP!

Because other people have different needs to you that are better served by other platforms. Get more experience under your belt and you'll figure it out.

Do they have a Kubernetes offering? Anything that requires an always-on VM doesn't work. I'm not putting my app and Postgres in different clouds.