Cloud VM benchmarks 2026

devblog.ecuadors.net

330 points by dkechag a day ago · 153 comments

zackify a day ago

I just ran some massive tests on our own CI. I use AMD Turin for this on GCP, which was noted as one of the fastest in the article.

The most insane part here is that the AMD EPYC 4565P can beat the Turins used by the cloud providers, by as much as 2x in single-core.

Our tests took 2 minutes on GCP, 1 minute flat on the 4565P with its boost to 5.1GHz holding steady vs only 4.1GHz on the GCP ones.

GCP charges $130 a month for 8 vCPUs. ALSO, this is for SPOT that can be killed at any moment.

My 4565P is a $500 CPU... 32 vCPUs... racked in a datacenter. The machine cost under 2k.

I am trying hard to convince more people to rack themselves, especially for CI actions. With the cloud provider charging $130/mo for 3x fewer vCPUs, you break even in a couple of months; it doesn't matter if it dies a few months later. On top of that you're getting fully dedicated hardware and 2x the perf. Anyways... glad to see I chose the right CPU type for gcloud, even though nothing comes close to the cost/perf of self-racking.

  • AussieWog93 21 hours ago

    Hetzner charge between €10 and €48 for an 8-vCPU setup, depending on how many other users you're happy to share with.

    For €104/mo you can get a 16-core Ryzen 9 7950X3D (basically identical to your 4565P) w/ 128GB DDR5 and 2x2TB PCIe Gen4 SSD.

    That's not to say you're wrong about dedicated being much better value than a VPS on a performance-per-dollar basis, but the markup the European companies charge is much, much lower than what they'd charge in the US.

    In this instance you're looking at a ~17-month payback period even ignoring colo fees. Assuming the ~$100 colo fee that a sibling comment suggested, you're looking at closer to 8 years.

    • Aurornis 20 hours ago

      Great points. If we’re going to talk about dedicated servers and long lock-in contracts, you have to look at the equivalent prices for hosted alternatives.

      It’s fun to start thinking about building your own server and putting in a rack, but there’s always a lot of tortured math to compare it to completely different cloud hosted solutions.

      One of the great things about cloud instances is that I can scale them up or down with the load without being locked into some hardware I purchased. For products I’ve worked on that have activity curves that follow day-night cycles or spike on holidays, this has been amazing. In some cases we could auto scale down at night and then auto scale back up during the day. As the user base grows we can easily switch to larger instances. We can also geographically distribute servers and provide lower latency.

      There is a long list of benefits that are omitted when people make arguments based solely on monthly cost numbers. If we’re going to talk about long term dedicated server contracts we should at least price against similar options from companies like Hetzner.

      • vladvasiliu 18 hours ago

        > One of the great things about cloud instances is that I can scale them up or down with the load without being locked into some hardware I purchased. For products I’ve worked on that have activity curves that follow day-night cycles or spike on holidays, this has been amazing. In some cases we could auto scale down at night and then auto scale back up during the day.

        At work we have this day/night cycle. But for some reason we're married to AWS. If we provisioned a bunch of servers at Hetzner or such 24/7/365 to cover the peaks with some margin, it would still be cheaper by a notable margin. Sure, 90% of them would twiddle their thumbs from 10 PM to 10 AM. So what?

        Sure, if your clients are completely unpredictable and you'll see x100 traffic without notice, the cloud is great.

        But how many companies are actually in that kind of situation? Looking back over a year or two, we're quite reliably able to predict when we'll have more visitors and how many more compared to baseline. We could just adjust the headroom to be able to take in those spikes. And I suppose if you want to save the environment, you could just turn off the Hetzner servers while they sit unused.

        • maccard 16 hours ago

          I’ve run MP game servers that follow this pattern. A good rule of thumb is to cover 75% of your peak load with your cheaper, steady-state, pre-allocated machines and burst for the last 25%. Burst capacity really is that expensive.
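
A rough sketch of what that 75/25 rule implies for cost. All prices and the burst profile below are invented for illustration, and `monthly_cost` is not a function from the thread:

```python
# Illustrative cost model for the 75%-steady / 25%-burst rule of thumb.
# Every price and the burst profile here are made-up example numbers.

def monthly_cost(peak_cores, steady_fraction, dedicated_per_core_month,
                 burst_per_core_hour, burst_hours):
    """Cover steady_fraction of peak with flat-rate capacity and the
    remainder with per-hour burst capacity used for burst_hours/month."""
    steady = peak_cores * steady_fraction
    burst = peak_cores - steady
    return steady * dedicated_per_core_month + burst * burst_per_core_hour * burst_hours

# 64-core peak; dedicated at $5/core/month, burst at $0.05/core/hour,
# with peaks lasting ~4 hours a day (~120 hours/month).
print(monthly_cost(64, 0.75, 5.0, 0.05, 120))  # hybrid: 336.0
print(monthly_cost(64, 0.0, 5.0, 0.05, 720))   # all on-demand 24/7: 2304.0
```

Even with these toy numbers, the hybrid layout is a fraction of the all-on-demand cost, which is the point of pre-allocating the steady base.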

      • ragall 10 hours ago

        If you can reasonably estimate your usage and the peak total usage is less than ~5x the minimum, it still makes sense to just rent hardware at Hetzner.

        You even have the possibility of managed racks, whereby you rent one or more racks, but the servers are still provided by Hetzner so you don't have to handle procurement, logistics or replacements.

    • rozenmd 10 hours ago

      I'd be terrified to run anything other than a classic web server on Hetzner, have heard too many stories of them arbitrarily terminating workloads they didn't understand.

  • Aurornis a day ago

    > My 4565p is a $500 cpu... 32 vcpus... racked in a datacenter. The machine cost under 2k.

    > The cloud provider charging $140 / mo for 3x less vcpus you break even in a couple months, it doesn't matter if it dies a few months later

    How do you calculate break even in a couple months if the machine costs $2,000 and you still have to pay colo fees?

    If your colo fees were $100/month you wouldn’t break even for over 4 years. You could try to find cheaper colocation, but even with free colocation your example doesn’t break even for over a year.

    • zackify a day ago

      The $140/mo is for 3x fewer vCPUs, so $420/mo in savings if you use all those same cores. Sorry for the poor comparison wording there. After a few months you're already $1,300+ ahead, and by 6 months you've paid off the machine.

      Colo fees are cheap, even if you need more than just 1U. Even with a $50-100 fee you easily get way more performance and come out ahead within a year.

      • Aurornis a day ago

        > by 6 months already paid the machine.

        You originally said “a couple months” but now it’s 6 months, with an assumption of $0 colocation fees, which isn’t realistic.

        In my experience situations rarely call for precisely 32 cores for a fixed period of 3 years to support calculations like this anyway. We start with a small set of cloud servers and scale them up as traffic grows. Today’s tooling makes it easy to auto scale throughout the day, even.

        When trying to rack a server everyone aims higher because it sucks to start running into limits unexpectedly and be stuck on a server that wasn’t big enough to handle the load. Then you have to start considering having at least two servers in case one starts failing.

        Racking a single self-built server is great for hobby projects but it’s always more complicated for serving real business workloads.

        • edoceo a day ago

          Don't nit-pick the "couple". It was used casually, to mean a not terribly long time. So the 2-6 spread, while technically big, is still just a trifle. While I'm nit-picking: upthread is talking about a limited box for CI and you're talking about scaling up real business workloads. That's just like the difference between 2 and 6. Give it a rest.

          Everyone: run your scenarios and expectations in a spreadsheet and then use real data to run your CBA. Your case will be unique(ish) so make your case for your situation.
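
For what it's worth, the core of that spreadsheet is a one-liner. A minimal sketch, using the rough figures quoted in this thread as assumptions (`breakeven_months` is an invented name, and the $520/mo equivalence is just 4x the quoted 8-vCPU price):

```python
# Back-of-the-envelope break-even model for self-racking vs cloud.
# All dollar figures are rough numbers quoted in this thread.

def breakeven_months(hardware_cost, cloud_monthly, colo_monthly):
    """Months until the hardware cost is recouped by the monthly saving.
    Returns None if colo costs as much as (or more than) the cloud."""
    saving = cloud_monthly - colo_monthly
    if saving <= 0:
        return None
    return hardware_cost / saving

# $2,000 machine with 32 vCPUs vs roughly $130/mo per 8 cloud vCPUs
# (~$520/mo for equivalent capacity), at two hypothetical colo fees.
print(breakeven_months(2000, 520, 0))    # ~3.8 months with free colo
print(breakeven_months(2000, 520, 100))  # ~4.8 months with $100/mo colo
```

The disagreement upthread is mostly about which `cloud_monthly` is the fair comparison (one 8-vCPU instance vs full 32-vCPU equivalence), which is exactly why running your own numbers matters.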

          • Aurornis 20 hours ago

            > So the 2-6 spread, while technically big, is still just a trifle.

            I think you’re misreading. Even the 6-month figure was based on the invalid assumption of $0 colocation fees. Add in even cheap colocation fees and it’s pushed out even further.

            That’s not really a nit-pick when the claims were based on impossible math. It’s more of a motte-and-bailey, where they come in with a “couple of months” claim that sounds awesome on the surface but then fall back to a completely different number if anyone looks at the details.

            • cyberpunk 18 hours ago

              It’s even dumber than that.

              Let’s not forget that if even three engineers work on this migration for only a week, your cost is now tens of thousands for this couple-hundred-euro cost saving.

              (assuming avg all-in engineer costs in europe)

              Mostly it makes no sense to optimise infrastructure cost; it does make sense to make infrastructure faster, since almost all your spend is on engineers.

              Spending thousands to save hundreds is not a healthy business.

          • zackify a day ago

            yeah thanks for that i was just meaning a very fast return

        • jjmarr 21 hours ago

          You can take a hybrid approach and use the rack for base capacity, cloud for scaling.

    • Imustaskforhelp 16 hours ago

      Minor point, but I have seen colocation cost around $30-40 in some locations. $100 is usually more like colocating within Hetzner, for example (iirc).

      Just as a rule of thumb: if your servers cost more than $1k per month, or maybe even $500, and you are okay with colocation and everything, I have found that it breaks even. It beats even the cheapest options like Hetzner, so for GCP or anything that charges significantly more, a deeper analysis of colocation vs dedicated servers (or, for short burstable units, maybe even a VPS) is warranted.

  • oDot a day ago

    I used to run a site that compares prices[0]. Not only is the ecosystem pull to the cloud strong, but many developers today look at bare metal as downright daunting.

    Not sure where that fear comes from. Cloud challenges can be as or more complex than bare metal ones.

    [0]: https://baremetalsavings.com/

    • hamandcheese a day ago

      > Cloud challenges can be as or more complex than bare metal ones.

      Big +1 to this. For what I thought was a modest-sized project, it feels like an NP-hard problem coordinating with gcloud account reps to figure out which regions have both enough hyperdisk capacity and compute capacity. A far cry from being able to just "download more RAM" with ease.

      The cloud ain't magic folks, it's just someone else's servers.

      (All that said... still way easier than if I needed to procure our own hardware and colocate it. The project is complete. Just delayed more than I expected.)

      • hvb2 9 hours ago

        > The cloud ain't magic folks, it's just someone else's servers.

        The cloud is where the entire responsibility for those servers lives elsewhere.

        If you're going to run a VM, sure. But when you're running a managed db with some managed compute, the cost for that might be high in comparison. But you just offloaded the whole infra management responsibility. That's their value add

        • stephenr 3 hours ago

          But any serious deployment of "cloud" infrastructure still needs management, you're just forcing the people doing it to use the small number of knobs the cloud provider makes available rather than giving them full access to the software itself.

    • jbverschoor 19 hours ago

      Partitioning a server! Omg lol

      It’s funny, bc AWS did not start this line of business. What they did do is make it possible to pay by the hour. The ephemeral spare compute is what they started.

      Yet almost nobody understood the ephemeral part.

      You might even be better off running a Mac mini at home on fiber, especially for backend processing.

    • satvikpendem a day ago

      > Not sure where that fear comes from.

      Probably because most developers these days have not known a world without using cloud providers, with AWS being 20 years old now.

      • jmathai a day ago

        Racking your own hardware doesn’t get you web UIs and APIs out of the box. At least it didn’t 2 decades ago.

        • satvikpendem a day ago

          Sure, but now it does (via the many OSS PaaS options), so the calculus must change accordingly.

          • spockz 17 hours ago

            Which OSS PaaS are there that are noteworthy? Or do you mean something like Kubernetes?

            • Imustaskforhelp 16 hours ago

              Coolify is usually loved by the community.

              Dokploy is another good one.

              Kubero seems nice for more kubernetes oriented tasks.

              But I feel like if someone has a single piece of hardware, as the OP does, Kubernetes might not be of as much help, and Coolify/Dokploy are so much simpler in that regard.

              • spockz 9 hours ago

                Thanks. I will look into those.

                I suppose kubernetes with the right operators installed and the right node labels applied could almost work as a self service control plane. But then VMs have to run in kubevirt. There is crossplane but that needs another IaaS to do its thing.

    • keepamovin a day ago

      The fragmentation and friction! Comparing prices usually requires 10 open browser tabs and a spreadsheet, which is what keeps people locked into their default cloud. I built a tool to solve this called BlueDot (ie, Earth, where all the clouds are)[0]. It’s a TUI that aggregates 58,000+ server configurations across 6 clouds (including Hetzner). It lets you view side-by-side price comparisons and deploy instantly from the terminal. It makes grabbing a cheap Hetzner box just as easy as spinning up something on AWS/GCP.

      [0]: https://tui.bluedot.ink

      • Imustaskforhelp 16 hours ago

        I use serververify, which was created by jbiloh from the lowendtalk forum and uses YABS (Yet-Another-Benchmark-Script) to give details about a lot more things than usually meet the eye.

        That being said, I have found getdeploying.com to be a decent starting point as well if you aren't too well versed in the low-end providers, who are quite diverse, which cuts both ways.

        Btw, the legendary https://vpspricetracker.com (one of the first websites I personally opened to find VPS prices when I was starting out or curious) is also created by jbiloh.

        So these few websites, plus casually scrolling LET, are enough for me nowadays to find the winner with infinitely more customizability. I understand the point of a TUI, but the whole hosting industry has always revolved around websites from the start, so providers are less interested in making TUIs for such projects. Generally speaking, at least, that's my opinion.

  • icelancer a day ago

    Self-racking lets you rack a bunch of gear you'd never find in VM/dedicated rentals, like consumer parts or older, still very good parts. Overclocking options are available as well if you DIY.

    If you need single-threaded performance, colo is really the only way to go anyway.

    We have two full racks and we're super happy with them.

    • Melatonic a day ago

      Or underclocking and undervolting for even better performance to price/power/longevity ratios.

      • sroussey a day ago

        For a single rack, you really don’t have too many choices for power. You make a choice to provision and pay, I never had anyone check how much of that I used and give me money back. Maybe things have changed though.

      • icelancer a day ago

        No doubt. Especially for GPU inference at scale. We overclock/overvolt for training and tune way down for inference.

  • vmg12 a day ago

    You can go on OVH and get a dedicated server with 384 threads and a Turin cpu for $1147 a month. You have to pay $1147 for installation and the default has low ram and network speeds but even after upgrading those it's going to be 1/5 of what it would cost on public clouds.

  • tempay a day ago

    This is basically the premise of https://www.blacksmith.sh/ as far as I know, though without the need to host the hardware yourself and the potential complexity that comes with that.

    • sroussey a day ago

      I did have some MySQL servers racked for over a decade and I was afraid to restart the machines. And yes as new versions of MySQL came out I did have to compile them myself.

      Similar lower specced machines that were closer to the public internet had boot disk failures, but I had a few of them, so it wasn’t an issue. Spinning metal and all.

      One of the db servers dying would have required a next day colo visit… so I never rebooted.

  • ahartmetz a day ago

    "vCPUs" are a bit of a scam in my experience. You usually don't get what the hardware (according to /proc/cpuinfo) is capable of.

    • tryauuum 17 hours ago

      Just want to say something in defence of cloud providers

      - sometimes you need to limit the list of available CPU features to allow live migration between different hypervisors

      - even if you migrate the virtual machine to the latest state of the art CPU, /proc/cpuinfo won't reflect it (linux would go crazy if you tried to switch the CPU information on the fly) (the frequency boost would be measurable though, just not via /proc/cpuinfo )

  • bob1029 19 hours ago

    > i am trying hard to convince more people to rack themselves especially for CI actions.

    What do you think the typical duty cycle is for a CI machine?

    Raw performance is kind of meaningless if you aren't actually using the hardware. It's a lot of up front capex just to prove a point on a single metric.

    • spockz 17 hours ago

      Raw performance, in the sense of single core performance, is still one of the most important factors for us. Yes, we have parallelised tests, in different modules, etc. But there are still many single threaded operations in the build server. Also, especially in the cloud, IO is a bottleneck you can almost only get around by provisioning a bigger CPU.

      Our CI runs smaller PR checks during the day when devs make changes. In the “downtime” we run longer/more complex tests such as resilience and performance tests. So typically a machine is utilised 20-22 hours a day.

  • dkechagOP a day ago

    A 16-core 4565P is of course faster in max single-thread speed than a 96-core part that GCP runs at an economically optimal base clock.

    A year ago I gave a talk about optimizing Cloud cost efficiency and I did a comparison of colocation vs cloud over time. You might find it interesting here, linking to the relative part: https://youtu.be/UEjMr5aUbbM?si=4QFSXKTBFJa2WrRm&t=1236

    TLDR: colocation broke even in 6 to 18 months against on-demand and 3-year reserved cloud, respectively. But spot instances can actually be quite a bit cheaper than colocation.

    You generally don't go to the cloud for the price (except if we are talking Hetzner etc.).

  • darkwater 20 hours ago

    Yeah, I expected this benchmark to include hosted "metal" hardware, with a "per-instruction cost" benchmark to see how providers like Hetzner fare against classic AWS VMs. It's a bit apples to oranges, I know, but I think that's what most people comparing pure cost per performance are interested in nowadays. I'm not going to migrate from AWS VMs to GCP or Hetzner VMs, but I might be open to Hetzner hosted servers instead for a massive enough cost reduction.

    • justinclift 19 hours ago

      > ... but I might be open to Hetzner hosted servers instead for a massive enough cost reduction.

      Don't use Hetzner for anything actually important to you. :(

      As to why: https://news.ycombinator.com/item?id=45481328

      • gizzlon 6 hours ago

        To be honest, I find it hard to believe this is common. They have been around for ages and are quite beloved by many. Maybe something went wrong in this case?

        Guess I will find out, think my cc expires soon.

        Also, you can pay by bank transfer, at least for dedicated.

        • justinclift 3 hours ago

          > To be honest, I find it hard to believe this is common.

          I agree. But it still happened, with literally no warning (I actually checked), and their support staff refused to even call me to get updated card details when I was in the middle of an actual cyclone. ie phone service worked, internet didn't

          Directly impacting our customers, who were extremely unhappy (to say the least).

          "Fuck Hetzner!" is not nearly strong enough to convey the sentiment.

          • stephenr 2 hours ago

            I mean, the context here is that a company stopped providing services after a bank cancelled a credit card they had been charging.

            For all they know, your legitimate charges were the fraudulent charges that triggered the cancellation.

            I cannot fathom why you keep using the term "expired" when that is a very different scenario to "cancelled by the issuing bank".

      • nine_k 9 hours ago

        A good business would send you a warning a month before your credit card expires, not after the fact.

        • stephenr 2 hours ago

          For some reason parent is using the word "expired" when they really mean "cancelled by the issuing bank".

  • alberth 21 hours ago

    Both Datapacket & OVH have the 4565p.

    This proc is a hidden gem.

    For most workloads it’s not just the most performant, but also the best bang-for-buck.

  • api a day ago

    Big cloud is ludicrously expensive. It’s truly amazing. Bandwidth is even worse. It’s like a 10000X markup.

    • sroussey a day ago

      It’s wild that no one knows just how cheap bandwidth really is. AWS pulled one over on people, and it’s like the movie studios still demanding 10% off the top for VHS distribution. Today.

      • jbverschoor 19 hours ago

        That’s with every industry

        Make things look like a complicated black box. Make sure it feels scary to roll your own. Hide the core technical skills behind abstracted skills

        • api 13 hours ago

          Cloud has done a truly epically awesome job at this. People are now afraid to set stuff up.

preserves a day ago

Disclosure: I work on VMs at Google Compute Engine :)

This was a really, really good write-up. I appreciated the breadth of VMs tested and the spread of benchmarks. A few random observations:

1. Turin is a beast.

2. The data on price-performance makes Hetzner look really fantastic, especially for small scale projects where region placement doesn’t matter much and big bursty scaling isn’t required.

3. I think the first cloud VM I ever provisioned was on DigitalOcean. I was surprised at how old their fleet was, but I guess they have some limited Emerald Rapids offerings now: https://www.digitalocean.com/blog/introducing-5th-gen-xeon-p...

  • dkechagOP 17 hours ago

    Huh, I have not been able to provision any newer CPUs after dozens of tests, certainly not Emerald Rapids. And that blog post is weird, their charts don't even have a key shown, it's like they bought a few CPUs and threw that quickly together to get people's hopes up. A real shame, I am still running DO droplets, but they are behind the times...

  • tehlike a day ago

    Hetzner really shines with their dedicated stuff.

    • winrid a day ago

      Yeah, a $45 hetzner box would probably be at the top of all these charts, but it's a little more work to provision.

    • allset_ 21 hours ago

      If only they offered dedicated in the US.

      • joshmn 21 hours ago

        There are plenty of other dedicated server providers that do.

        • nekitamo 20 hours ago

          Which comparable US dedicated server providers do you prefer?

          • embedding-shape 14 hours ago

            I tend to mostly use dedicated servers from Hetzner for my own projects and my clients' projects. Whenever they explicitly want US servers, I tend to go with Vultr's dedicated servers, which have been serving us well for many years.

          • veeti 20 hours ago

            OVH has dedicated in USA and Canada

            • scorpioxy 19 hours ago

              I've read several reports from customers saying that their customer service is really bad. Difficult to know with online reviews of course. Does anyone have positive stories to share? I am looking at Australian hosts specifically and Hetzner doesn't have any data centers here.

              • cantalopes 4 hours ago

                Been using them in production for years, never disappointed.

                What you should be aware of is their new exploration of S3 storage. I mean, the S3 works and everything, but it's still too early: the servers are kind of slow and sometimes fail to upload/download. They are still tuning the storage architecture. The API key management is kind of primitive (although much more headache-free than configuring AWS), and the online file browser is lacking.

                But for VPS servers, they are battle-tested veterans.

              • cb22 17 hours ago

                We use them heavily for test boxes and running experiments. Standard off-the-shelf machines are provisioned almost instantly, and never had any problems.

                More custom stuff (eg 100Gb/s NICs) takes a bit longer, but they've always been super responsive and quick to sort out any issues!

                The price / performance you get from something like their AX162 is just crazy, although unfortunately with the whole RAM / NVMe shortage the setup fee has gone up quite a lot.

nixgeek a day ago

Genoa was a big leap from Milan. Turin is a huge leap again. AMD really is doing spectacularly well at the moment. Kudos to Lisa Su and the team.

  • jiggawatts a day ago

    > Kudos to Lisa Su and the team.

    They're a typical hardware maker unable to focus on software, which is why NVIDIA is now a multi-trillion dollar corporation and AMD is "just" a few hundred billion.

    They've focused too much on CPUs and completely dropped the ball on AI and compute accelerators.

    It's especially sad considering that the MI300 and related accelerators on paper are competitive with NVIDIA hardware, it's just that they have nowhere near the same software stack, so nobody cares.

    • dijit a day ago

      Don’t really care.

      We were stuck with Intel, its nice that we have better CPUs.

      • a012 a day ago

        Yeah, remember when 4 cores / 8 threads was the high end until AMD Ryzen came out? If AMD hadn't done their best work, we'd still be stuck with the norm of 4 cores for who knows how many more years.

    • theandrewbailey 15 hours ago

      > completely dropped the ball on AI and compute accelerators

      AMD produces AI chips, and they seem to be doing quite well.[0] If they didn't, AMD wouldn't be worth anywhere near what it is.

      [0] https://openai.com/index/openai-amd-strategic-partnership/

    • LKummer 6 hours ago

      Nvidia datacenter GPUs have awful software. If they are focusing on it, they're not doing a good job.

    • close04 19 hours ago

      AMD has to fight both Intel and Nvidia in the market. It chose to take on Intel and clearly it was a wise decision. You can’t win if you fight every battle at once against much stronger opponents.

      And don’t get me started on the valuation of companies riding the AI bubble.

boberoni a day ago

Anyone have experience with Oracle Cloud and ease of moving away?

This benchmark seems to recommend Oracle Cloud, but I’ve heard that Oracle has historically used aggressive licenses and legal terms to keep customers locked-in.

  • dkechagOP a day ago

    I wrote the article. I would NEVER tell anyone to use Oracle, as the vendor lock-in and strong-arming and pricing are ridiculous. That said, I am hosting small projects on Oracle Cloud, due to the super low cost. I can just move them whenever they decide to be naughty; I am not using an Oracle DB or anything proprietary, just Linux VMs with my own MySQL setup.

    • olalonde 20 hours ago

      I was confused by your comment. I assume you meant to say "I would NEVER tell anyone to use Oracle DB" (as opposed to Oracle Cloud)?

  • 12345ieee a day ago

    I've been using it professionally for years.

    As long as you don't use their exclusive DBaaS, moving away is easier than from other places, as egress traffic is free.

    The user experience though, stuff of nightmares...

  • Aurornis a day ago

    I signed up for an Oracle Cloud trial. They closed my trial a few days later and shut down my one trial VM without warning.

    Weirdly they didn’t allow me to add payment info to continue. Even weirder their sales people kept contacting me asking me to come back. When I explained the situation they all tried to fix it and then went radio silent until the next sales rep came along to try to convince me to stay.

    I searched Reddit at the time and a lot of other people had the same experience. A lot of other people were bragging about abusing their free tier and trials without consequences. I still don’t know how they decided to permanently close my account (without informing the sales team)

    • dkechagOP 11 hours ago

      I had a bit of a hard time when I first signed up. It turns out the problem was that I had included "oracle" in my email (I give a different email on my domain to every provider), so some parts of the system considered it an internal email and other parts did not, leaving me with a weird account that could not do anything. Took them a month or two to figure out what was wrong...

  • XCSme a day ago

    I used their free tier.

    The account creation process was really confusing, and they kept turning off my instance because usage was not high enough.

    It seemed quite outdated/confusing to use last time I tried it a few years ago.

    • chrismorgan 15 hours ago

      This has made me leery of Oracle Cloud in the past. https://docs.oracle.com/en-us/iaas/Content/FreeTier/freetier...:

      > Idle Always Free compute instances may be reclaimed by Oracle. Oracle will deem virtual machine and bare metal compute instances as idle if, during a 7-day period, the following are true:

      > • CPU utilization for the 95th percentile is less than 20%

      > • Network utilization is less than 20%

      > • Memory utilization is less than 20% (applies to A1 shapes only)

      The stupid but presumably effective solution is to waste resources to keep above those limits.

      Another solution is offered by the email multiple sources cite they send when they reclaim (or warn they will reclaim? not clear) an instance:

      > You can keep idle compute instances from being stopped by converting your account to Pay As You Go (PAYG). With PAYG, you will not be charged as long as your usage for all OCI resources remains within the Always Free limits.
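
For what it's worth, the "waste resources" workaround usually amounts to a duty-cycled busy loop. A hedged sketch, assuming you only need to satisfy the CPU criterion quoted above (not network or memory); the `burn` function and its parameters are invented for illustration:

```python
# Hypothetical keep-alive burner: hold one core at roughly `target`
# utilization so the 95th-percentile CPU metric stays above the 20%
# idle threshold. Only addresses the CPU criterion from the quoted docs.
import time

def burn(target=0.25, period=1.0, cycles=3):
    """Busy-spin for target*period seconds of each period, sleep the rest."""
    for _ in range(cycles):
        deadline = time.monotonic() + target * period
        while time.monotonic() < deadline:
            pass  # busy work keeps the core pinned during this slice
        time.sleep((1.0 - target) * period)

# A real deployment would loop forever under a service manager;
# a few cycles here just demonstrate the duty cycle.
burn(target=0.25, period=1.0, cycles=3)
```

Switching the account to PAYG, as the quoted email suggests, is obviously the saner fix.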

      • dkechagOP 11 hours ago

        They warn they will reclaim. I had two accounts, one processing data for a free weather service I volunteer for, so it's not idle and has had no issues for a couple of years now. The other is for personal projects, so at times it stays idle for a while, and I would get these email warnings, which made me switch it to paid. I have not paid a cent, again for a couple of years now.

  • syncordie 19 hours ago

    We moved about 400 VMs across AWS and GCP to Oracle Cloud, spread across 4 regions. Our approach was to first move our applications to rely only on Load Balancers, Block/Object Storage and Compute from the cloud. No RDS, BigQuery etc. This required a slight increase in SRE/DevOps resources.

    Once we did this, the move was fairly easy (the exception being having to write our own auto-scaling logic as the built-in one is very limited). Overall we reduced our cloud spend (even accounting for the additional staff) by about 40%. Bandwidth is practically free and you are not limited to specific combos of CPU/RAM (so you can easily provision something with 7 cores and 9 GB RAM). Another big factor for us was that compute costs in OCI don't vary by region.

    I would not recommend using any other managed services from OCI besides the basics (we tried some and they are not very reliable). We've seen minor issues in Networking periodically (Private DNS, LBs or interconnectivity between compute instances), but overall I would say the switch has been worth it.

  • acedTrex a day ago

    Every time I have to log in to Oracle Cloud I strongly wish I was not logging into Oracle Cloud...

  • sqircles a day ago

    They'll find one way or another to laugh all the way to the bank, that is for sure.

e-master 14 hours ago

> The Azure pricing is at least as complex as AWS/GCP, plus the pricing tool seems worse. They also lag behind the other two major providers in CPU releases - Turin and Granite Rapids are still in closed preview at the time of writing this.

Small nitpick, but Turin-based VMs have been available for more than a month now [1], and they are a beast.

[1] - https://techcommunity.microsoft.com/blog/azurecompute/announ...

  • dkechagOP 12 hours ago

    The testing ran from mid-September to mid-January; Turin was released on Azure at the end of January, so it simply did not make the cut. Unfortunately I could not wait forever, they always seem to lag behind by at least half a year :(

    I might do an addendum just for Azure...

PaulKeeble a day ago

A few comparisons just from a gaming PC with a 9800X3D (8C16T 5.2 Ghz Boost).

7 Zip benchmark

9800X3D: 130 GIPS compression, 134 GIPS decompression.

C8A: 21577 MIPS (21.5 GIPS) compression, 9868 MIPS (9.9 GIPS) decompression.

Geekbench 5

9800X3D 16975 multithread, 2474 single thread

C8A 4049 multithread, 2240 single thread

A desktop-class CPU is definitely quicker single-threaded and multithreaded, no surprises there since most of these cloud instances are dual core. The single-threaded performance of the C8A is actually pretty good; it's the best of the bunch by a wide margin, and most of the other CPUs are far behind. Memory performance appears to be atrocious all around.

  • dijit 19 hours ago

    Thanks for the benchmarks!

    I’d only add that it's very common for gaming computers to be screaming fast, more so than workstation machines or servers, which are a bit more conservative with performance and emphasise correctness (slower cores, slower RAM, more ECC). It's not a lot, but it can feel annoying when you sit at a company-issued workstation that cost €10,000 but get worse performance than a €2,500 gaming computer.

deaux 21 hours ago

Vultr and HostHatch are also worth considering.

  • andai 7 hours ago

    Vultr topped the benchies a few years ago. Not sure how they fare now, would be good to see some more recent benchmarks. Was sad to see them missing from this one.

  • npn 17 hours ago

    Don't use Vultr. I still have 60 dollars of credit there that I can never spend, because they will limit your account if you do not use it for some time.

    And the pricing is laughably expensive compared to OVH.

  • esseph 17 hours ago

    Vultr is great, have been using them for many years.

chrisweekly 12 hours ago

YIL (Yesterday I Learned) about Blue Dot

https://tui.bluedot.ink/

a CLI for managing cloud resources that lets you compare prices, apples:apples, AFAICT. I haven't tried it yet but its featureset looks pretty great.

s_dev 17 hours ago

Would have been nice to see Scaleway included; I recently migrated from DigitalOcean to them and found them very similar in pricing and performance.

pothamk 15 hours ago

VM benchmarks are tricky because you often end up benchmarking the hypervisor as much as the VM. CPU pinning, storage backend, network virtualization, and noisy neighbours can swing results a lot. I’ve seen identical workloads differ ~20–30% just from those factors.
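One way to quantify that swing is to run the same fixed workload repeatedly on a candidate VM and look at the spread. A minimal sketch in pure Python (it measures overall timing jitter, not any one hypervisor factor):

```python
import statistics
import time

def cpu_workload(n=200_000):
    """A fixed amount of integer work; returns wall-clock seconds taken."""
    t0 = time.perf_counter()
    s = 0
    for i in range(n):
        s += i * i
    return time.perf_counter() - t0

def variance_report(runs=20):
    """Run the workload `runs` times and summarize the timing spread."""
    times = [cpu_workload() for _ in range(runs)]
    mean = statistics.mean(times)
    return {
        "mean_s": mean,
        "stdev_s": statistics.stdev(times),
        "spread_pct": 100.0 * (max(times) - min(times)) / mean,
    }
```

Running this at different times of day on the same instance gives a rough picture of how much of a benchmark number is the VM and how much is the neighbourhood.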

rr808 a day ago

Still having your own hardware seems so much cheaper. Maybe even just for dev/uat environments?

Every big corporate I have worked at has a lower cost of capital than Amazon, and yet they want to move to AWS. I just don't understand it.

  • andai 7 hours ago

    I think it's incentives. Working with AWS is good for your resume. It's also a responsibility thing. If AWS is down then it's Amazon's fault. If there's a problem with your physical server, then it's your problem.

    A broader thing here is -- and you may also notice this trend in software -- employees are incentivized towards complex solutions, while business owners are incentivized towards simple solutions.

    (Shiny object syndrome sold separately ;)

  • ibejoeb a day ago

    I've moved two clients to colo. Dramatic cost savings. So many systems only use VMs and a few basic cloud features. Everyone knows this, but just to make the point, you can still use certain cloud products (cloud storage for example) just fine while running your primary workloads on your own hardware. Sometimes it makes perfect sense, and you just need someone to nudge you and tell you it's going to be ok.

  • hrmtst93837 19 hours ago

    I think lower cost of capital is a narrow metric and rarely reflects the real total cost of ownership once you account for ops headcount, provisioning lag, redundancy requirements, patching, and developer time.

    Cloud looks expensive on sticker price, but it buys instant provisioning, autoscaling, managed databases and multi-region DR, and those benefits only pay off if you actually exploit autoscaling, reserved or savings plans, spot fleets and cost tooling like Kubecost or AWS Compute Optimizer to enforce right-sizing and kill zombie instances.

    If you want cheap dev and UAT keep them on on-prem metal or cheap colo, but automate with Terraform and run reproducible runtimes like k3s or devcontainers so environments stay consistent and you do not trade lower capex for a creeping operations nightmare.
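The "kill zombie instances" step above can be as simple as filtering instance metadata by state and age. A sketch under assumptions: EC2's DescribeInstances only exposes the stop time inside the StateTransitionReason string, so the `StoppedAt` datetime here is a hypothetical field the caller pre-parses:

```python
from datetime import datetime, timedelta, timezone

def zombie_instances(instances, max_stopped_days=14, now=None):
    """Return IDs of instances stopped for longer than `max_stopped_days`.

    `instances` is a list of dicts shaped like EC2 DescribeInstances output,
    with a hypothetical pre-parsed `StoppedAt` datetime added by the caller.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_stopped_days)
    return [
        inst["InstanceId"]
        for inst in instances
        if inst["State"]["Name"] == "stopped" and inst["StoppedAt"] < cutoff
    ]
```

The same shape works for unattached volumes or stale snapshots; the tooling the parent mentions is essentially this plus reporting.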

  • dkechagOP a day ago

    Going to the cloud can't possibly be as cheap as owning your own hardware, for obvious reasons: they have to make money somehow. Well, unless you use spot instances, which use spare capacity. In any case, you move to the cloud despite the cost if you need the multi-region redundancy, the management/features etc. More commonly it's because the higher-ups heard everybody's doing it, but oh well :D

  • PaulKeeble a day ago

    The main thrust of the economic argument has been the cost of the system administrators who maintain the hardware, with electricity and cooling being big ongoing costs. Also, when AWS launched, it wasn't uncommon to order a server and have it take 3 months to arrive.

    I think in practice the system administrators are still in the company now as AWS engineers; they still keep all that platform stuff running, and you're paying AWS for their engineers as well as electricity. It has the advantage of being very quick to spin up another box, but machines these days can come with 288 cores; it's not a big stretch to maintain sufficient surplus and the tools to let teams self-service.

    Things are in a different place from when AWS first launched; AWS ought to be charging a lot less for compute, memory and storage. Their business is wildly profitable at current rates because machines got cheaper per core.

  • dabinat 19 hours ago

    I think it can make sense for user-facing services. I host my web and database servers with AWS because unmanaged DBs can be a PITA, Amazon is peered with basically everyone, AWS is way more generous with network speeds than many dedicated / colo providers, and it’s easy to scale capacity up and down. Backend servers are hosted with cheaper providers though.

  • doener a day ago

    There are so many articles like these:

    "We Moved from AWS to Hetzner. Cut Costs 89%. Here’s the Catch."

    https://medium.com/lets-code-future/we-moved-from-aws-to-het...

    • selectively a day ago

      You just linked AI slop.

      • doener a day ago

        You mean the image? The text does not sound like AI at all IMHO.

        • dkechagOP a day ago

          > No theory. No fluff. Just production.

          ChatGPT tells me "no theory, no fluff" all the time :D

          • BeastMachine a day ago

            Where do you think it learned that phrasing from?

            • deaux 21 hours ago

              Not from people using that same phrasing twice within a few sentences.

              """ No warning. No traffic spike. Just… more money gone.

              That’s when I finally looked at Hetzner.

              I’ve seen too many backend systems fail for the same reasons — and too many teams learn the hard way.

              So I turned those incidents into a practical field manual: real failures, root causes, fixes, and prevention systems.

              No theory. No fluff. Just production. """

              It's clearly slop, they immediately use effectively the same one again:

              """ That last line isn’t a joke. There were charges I genuinely couldn’t explain. Elastic IPs we forgot to release. Snapshots from instances that no longer existed. CloudFront distributions someone set up for testing. """

              No, human writers don't repeat this pattern in every single paragraph. They use it at most once across a whole article.

              • BeastMachine 21 hours ago

                Repetition is a very common tool in writing (ie 'I have a dream').

                I'm just irked that it's being called out for AI slop because "I feel it in my bones!!"

                There's a good chance it was written using AI -- should that matter? If the content is wrong/sucks, say that instead. If you're going to dismiss all AI assisted writing: good luck in the next decade.

                • jurgenburgen 18 hours ago

                  Reading the same (very annoying marketing blog) style of writing gets old fast.

                  It’s like suddenly all memes are just the same meme and nobody makes their own memes because “AI does it better”.

                  The style of writing is an intrinsic part of communication, if you can’t critique that then what is content? We’re not machines sharing pieces of data with each other.

                • johndough 17 hours ago

                  AI-written text is not necessarily incorrect, but if the author did not take their time to remove the AI slop, they probably did not put much effort into it elsewhere. In addition, the text is often over two times longer than without the slop, which disrespects the reader's time (even worse in this case, since a significant fraction of the article is an ad for the author's books).

  • NERD_ALERT a day ago

    > I just don’t understand it

    Maintaining and updating your own hardware comes with so much operational overhead compared to magically spinning up and down resources as needed. I don’t think this really needs to be said.

    • sroussey a day ago

      I dunno… for setup, yes absolutely. One time cost in time. After that, not really.

      • NERD_ALERT 13 hours ago

        It’s absolutely not a one-time cost. Once you have it you need to hire people full time to maintain it and eventually upgrade it. Hardware fails constantly.

        • sroussey 5 hours ago

          I've done this for decades with a full rack. Stuff fails on occasion. So what?

  • p_ing a day ago

    You need to factor in the data center, power, cooling, hands-on support, future growth, etc.

    You're never just paying for the hardware.

  • UltraSane 17 hours ago

    Everything I've read says that self-hosting doesn't become cheaper than AWS until companies reach $1-3 million per month in spending, once all costs are accounted for. Then there is the highly overlooked aspect that a good API like AWS's lets your expensive admins get things done hundreds of times faster than most self-hosted IT can. It usually takes months for most companies to buy and install additional capacity.

    • rixed 17 hours ago

      > good API like AWS has lets your expensive admins actually get things done hundreds of times faster than how most self-hosted IT can do

      Depends. APIs must take into account many more cases than our own specific use case, and I find we are often spending a lot of time going through unnecessary (for us) hoops. And that's leaving aside possible API changes.

      • UltraSane 16 hours ago

        I'm having a hard time imagining a situation where using an API is slower than not.

himata4113 a day ago

I am still running Rome EPYC CPUs that I picked up for a couple hundred and they're doing great. Power usage is not the best and single-thread is awful (-50%), but multithreaded performance kicks the 9950X in the ass at around 90k vs 70k.

grzes 16 hours ago

The only thing that keeps me from migrating from DO to Hetzner is the lack of a monitoring agent on the latter. It's really nice to set up HDD & CPU alerts directly from within the cloud provider's control panel.

lobofta 19 hours ago

Would love a GPU benchmark too, especially for training and inference workloads.

kev009 a day ago

It's ironic to see Oracle as a value play, but you couldn't bank on that indefinitely.

AtlasBarfed a day ago

Needs: network performance and costs, desperately.

I wouldn't mind SAN/non-local storage performance and costs too.

VM cost isn't where AWS gets you. It's ALLLLLLLL the other nickel-and-diming that they kill you on, especially since outbound data transfer costs a lot of money to get off of the platform.
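To put rough numbers on the egress point, here is a toy calculator. The defaults are assumptions, not quoted pricing: around $0.09/GB is a commonly cited AWS internet-egress list price, and the 100 GB free allowance is a simplification of the real tiers:

```python
def egress_cost_usd(gb_out, rate_per_gb=0.09, free_gb=100):
    """Rough monthly internet-egress cost under a simple flat-rate model.

    Defaults are assumptions (~$0.09/GB beyond 100 GB free), roughly the
    shape of AWS list pricing; real tiers and discounts differ.
    """
    return max(gb_out - free_gb, 0) * rate_per_gb
```

At those assumed rates, ~10 TB/month of egress comes out to roughly $900, versus effectively $0 on providers that bundle tens of TB of bandwidth.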

_pdp_ a day ago

You can extract a lot of value from bare-metal servers from Hetzner, but you need to put in some effort initially to get them going. That being said, it is not really that difficult. And, frankly, it's a lot more fun.

  • navigate8310 a day ago

    Could you explain the "some effort initially" part?

    • jurgenburgen 18 hours ago

      You take care of disaster recovery, failing machines, the works. Getting a new (or replacement) machine from Hetzner takes longer than spinning one up on a cloud provider so you need to be able to survive some machines failing.

    • speedgoose 15 hours ago

      Also no encryption at rest. You have to set up LUKS yourself.

    • esafak a day ago

      The usual bare metal lack of amenities, that's all.

jeffbee a day ago

The only conclusion you can draw from this is nobody wants to use Oracle and they are trying to buy customers.

metadat a day ago

What about performance per dollar? Did I miss this part?

  • dkechagOP a day ago

    There's a table of contents near the top. Use it to jump to the performance per dollar.
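For anyone skimming, the metric in that section is just a benchmark score divided by the monthly price. A sketch with made-up illustrative numbers (not the article's):

```python
def rank_perf_per_dollar(vms):
    """vms: iterable of (name, benchmark_score, monthly_price_usd).

    Returns (name, score_per_dollar) pairs, best value first.
    """
    ranked = [(name, score / price) for name, score, price in vms]
    ranked.sort(key=lambda pair: pair[1], reverse=True)
    return ranked
```

The caveat with any such ranking is that it inherits all the noise of the underlying benchmark, so the ordering of closely priced instances can flip between runs.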

puskavi a day ago

I'm glad to see that the €36 I'm paying for 250/250 fiber is still competitive in the grand scheme of the internet.

Imustaskforhelp a day ago

There are definitely ways to get some really good hardware from lowend shops as well.

Hetzner is pretty decent too, but I think OVH might be even cheaper while still being competitive. One minor issue people can have with OVH is the occasional setup fee, but from time to time, like during Black Friday or special deals, there are ways to get it waived.

If someone wants something stable, OVH is pretty great as well and comparable to Hetzner in terms of pure price. I have heard that, given their scale, their support can be 50/50, but I recommend joining their Discord and maybe even messaging them on Twitter (oof), as this worked for someone on Hacker News the last time something like this was discussed.

If you are okay with some more steal factor, use netcup.

You should probably look at gaming-type servers too. I know a one-man-shop provider who was passionate about this stuff and had invested in high-end hardware ($200k).

Going more into provider-side finances: usually the idea is that they recoup the costs in 5 years if running at decent capacity. In the example of the $200k investment, they currently make IIRC $40k-60k per year. You have to somehow find your way to customers, but that is being messed up by AI-driven RAM prices, as the cost of fixing any broken hardware has absolutely skyrocketed, eating much of the profits.

It's an extremely competitive market, so if you are looking for something specific, you can ask in forums like LowEndTalk or LowEndSpirit, and providers can respond matching the price/performance you are looking for. They can also provide test servers if need be, and there is a unified form of benchmarking in this space called YABS (Yet Another Benchmark Script) which you can request from a provider.

That said, if someone doesn't want to go through this hassle: the same provider I mentioned earlier has also said publicly that Hetzner's prices are pretty competitive, and in all honesty I agree.

If someone doesn't want to go through any part of this hassle, then Hetzner/OVH are my go-to safe options. They are big enough that their downtime won't be blamed on you for picking them, while being really good in themselves (something I have heard mentioned quite often on Hacker News).

Within this space there is no generic winner if you don't know exactly what you want; different niches are occupied. So if you want an even cheaper alternative to Hetzner, it's best to first decide exactly what you need and then ask, rather than just browsing what already exists.

Also, if possible, make deals during Black Friday/Cyber Monday. Not sure about Hetzner, but with OVH and other providers you can sometimes get recurring discounts and the server setup costs removed.

  • winrid a day ago

    At FastComments we split our fleet across both OVH and Hetzner in case of an issue, so it's been interesting to compare reliability, etc. I don't have anything bad to say about either provider.

secondcoming 16 hours ago

Why is the n4d-2 spot price nearly double that of the n2d-2?

nodesocket a day ago

Awesome write-up. I use EC2 t4g instances extensively as I tend to scale workloads horizontally (using Kubernetes, Docker), and admittedly the workloads aren’t CPU constrained. Would be interesting to see how t4g’s compare to other low-end comparable VMs on GCP, DigitalOcean, Hetzner. Though I get the difficulty of testing, as it requires maxing out CPU credits (waiting) on each VM before starting tests to get accurate results.

Looking forward to t5g’s whenever (if ever) they release.

doener a day ago

"Unfortunately, the [Hetzner] CPX22 is available only in eu-central and ap-southeast, but if that’s OK with you it is the best value and fastest overall."

lostmsu a day ago

GCP (near the top), 3-year reserved: 16 ARM cores + 120 GB RAM + 1 TB local SSD costs > $1k/month. It does not even match the specs of a Ryzen AI 395 Framework mini PC that goes for ~$4k AFAIK.

OsrsNeedsf2P a day ago

tl;dr Hetzner has the best performance for the price. But also Hetzner just bumped prices like 30%.

Thaxll a day ago

You can't compare VPSes with VMs from major cloud providers; VPSes don't offer anything besides basic compute.

Also, virtualization from cloud providers is way better because they have custom hardware and software, so you don't suffer from noisy neighbours, for example.
