Ask HN: Should I use my old computer instead of cloud?

31 points by me_me_mu_mu 4 years ago · 21 comments · 1 min read


I’ve got an old (but still not bad for dev needs) quad-core with 16 GB of RAM. That’s quite a lot for running various small containerized services. However, I’m wondering if this is considered bad practice.

Cloud machines for something like this would be >$100 a month.

What security measures and other considerations would I need to keep in mind if I go this route?

For example, I have an emailer service that sends updates to users (it needs to connect to my remotely hosted DB).

manfre 4 years ago

Why would you rent a cloud instance as overprovisioned as your home computer? Compare prices based on what you'll actually use, not what you have available. You'd be surprised how much you can run on a few small VPSes or a container hosting service like fly.io

  • mikecoles 4 years ago

    A couple of years ago, we did calculations showing that for the price of a year of hosting, we could buy the equipment outright. That price covered only the hardware, though, not electricity, cooling, connectivity, etc.

yellowapple 4 years ago

I've seen enough small businesses with "servers" built from desktops and laptops (and have even been the one doing it, in cases where the one needing it can't justify budgeting for a proper server but does need "something" and is fully aware that I offer nothing even vaguely resembling a warranty) that I'd say it's doable. Yes, it's "bad", but it's a massive money saver and "bad" is better than "nonexistent".

The big risk will be around hardware reliability. A desktop or laptop just ain't built like a server is. Knowing this, redundancy is key. Hell, it's key anyway, but it's even more acute of a need when you're using consumer hardware - especially since your average desktop/laptop won't have redundant power supplies and hot-swappable drive caddies and multiple NICs and all the other goodies that preserve server uptime. Hardware reliability is one of those things that PaaS providers largely abstract away for you, so keep that in mind, too.

My usual strategy for a "poor man's server farm" is to treat entire machines as disposable. Ain't like most laptops have the physical connectors for RAID anyway, and safeguarding data is what backups are for. If any component starts to fail, everything's migrated to a different cheap piece of shit and the old one gets thrown on the "fix it eventually" pile to be either repaired or e-wasted.

superb-owl 4 years ago

If you've got external users, I'd go cloud.

Hardware dies unpredictably. The cloud providers have figured out how to be highly resilient to it.

If you use your own hardware, there's a XX% chance you're going to have a really bad day sometime in the next year.

znpy 4 years ago

> However I’m wondering if this is considered a bad practice

It depends who you ask.

I do the same at home and it works like a charm, but I have no users beside myself and my SO.

If you have paying users and SLAs you'd better get that $100/month cloud machine or have a disaster recovery plan ready (and tested).

> What security measures and other considerations would I need to keep in mind if I go this route?

the usuals... disable password authentication, configure the firewall, do not share /var/run/docker.sock with containers, don't turn off SELinux, run periodic backups and test your recovery procedures periodically, etc. Normal sysadmin stuff.
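The "test recovery procedures" part is the step most people skip. A minimal sketch of what it can mean in practice, with purely illustrative paths:

```shell
# Back up a directory and then actually verify the archive restores.
# All paths here are illustrative.
SRC=/tmp/demo_src
BACKUPS=/tmp/demo_backups
RESTORE=/tmp/demo_restore
mkdir -p "$SRC" "$BACKUPS" "$RESTORE"
echo "important data" > "$SRC/notes.txt"

# create a dated, compressed archive
tar -czf "$BACKUPS/backup-$(date +%F).tar.gz" -C "$SRC" .

# test the recovery procedure: restore and compare against the source
tar -xzf "$BACKUPS"/backup-*.tar.gz -C "$RESTORE"
diff -r "$SRC" "$RESTORE" && echo "backup verified"
```

A backup that has never been restored is only a hope, not a backup; the diff at the end is the whole point of the exercise.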

  • fm2606 4 years ago

    Same.

    My website is mainly just for myself. It runs on a couple of RPis: an RPi 3B+ for nginx and my Python Flask app, and an RPi 4 (4 GB) for the DB with external HDDs.

    A few years ago I bought some old servers, one with 6 CPUs / 12 cores and 128 GB of RAM. I was ecstatic, but between the noise and the heat I couldn't justify running them 24x7 for the stuff I was doing, so I went with the Pis. Again, for what I need and do with them, they fit my purpose very well.

    A couple of weeks ago I signed up for Kamatera's 30-day free trial to use them as a reverse proxy. I had been using NOIP to route to my home, but it exposed my home IP. NOIP is mainly for when your ISP rotates your IP address, but mine has been static for more than 4 or 5 years. For less than the price of NOIP per year I can have the bottom-tier VPS ($4/month) and control everything myself. Once the free trial is done I will continue using them.

    Is it fast? Nope but I don't care. It is for me and nothing I'm doing requires instantaneous access. There is a delay when the HDDs first spin up but after that it's fine.

    Like another poster said, maintenance needs to be done; I do it on Sundays and manually update all my Pis. Why manual? Eh, I enjoy it. This weekend I made backup images of my SD cards. My cards have been running for 3+ years and I only recently had one fail. I view my log files every few days or so; again, I kind of enjoy it, and it gets me away from developing and thinking more about sysadmin work.

    A couple of months ago I set up a mail server for shits and giggles. This was mostly so I could scan from my old Ricoh Aficio 1515mf and email the scan to myself. I got that part working; it's just inbound mail from outside the network that isn't working.

    If this were for a business I would almost certainly go cloud, or, if I had the cash, buy a decent server and co-locate. Again, that's just because I enjoy that aspect of programming / dev / admin stuff and I prefer to have as much control as possible (co-lo > cloud ... for me, may not be for others).

    I find being able to do this stuff helps me as a remote programmer who has full access to cloud servers at work.

  • bdavisx 4 years ago

    >If you have paying users and SLAs you'd better get that $100/month cloud machine or have a disaster recovery plan ready (and tested).

    I'd say go with the cloud and have a disaster recovery plan ready and tested.

jeroenhd 4 years ago

You can self-host if you monitor for signs of hardware failure and, when you rely on the machine to host services for customers or other people, make sure replacement hardware and a good backup strategy are always ready.

If it's just for you and you have the hour or so per week to dedicate to updates and maintenance, then I don't see why not. I run two servers in my room and they've worked fine with minimal maintenance so far, except for a few old hard drives that needed replacing and a fan that I couldn't get replaced because of a screw I stripped years ago.
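Watching for those signs of hardware failure can be a short weekly script. A hedged sketch (smartctl comes from the smartmontools package; /dev/sda is illustrative, and each tool is guarded so the script still runs where it is missing):

```shell
# Quick health check for a home server.
if command -v smartctl >/dev/null; then
    smartctl -H /dev/sda       # SMART overall health verdict (illustrative device)
else
    echo "smartctl not installed"
fi

df -h /                        # root filesystem usage

if command -v sensors >/dev/null; then
    sensors                    # temperatures, from the lm-sensors package
fi
```

Drop something like this into cron and mail yourself the output, and most drive failures stop being surprises.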

There are quite decent cloud machines out there if you just look beyond the big guys like AWS and GCloud. https://contabo.com/en/vps/ has decent servers for cheap, as does https://www.hetzner.com/cloud

If your emailer service makes direct contact with destination SMTP servers, then running from home is probably not an option. If you use an external SMTP server to deliver mail to the destination servers then this won't be a problem.
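The external-relay route usually means pointing the local mail system at the provider's submission port. A hypothetical Postfix snippet (hostname and credentials file are illustrative, not any specific provider's values):

```
# /etc/postfix/main.cf -- relay all outbound mail through an external
# provider instead of contacting destination MX servers directly.
relayhost = [smtp.example-provider.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

This sidesteps the usual home-IP problems (blocked port 25, missing reverse DNS, residential-range blocklists) because the provider's servers do the final delivery.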

kavehkhorram 4 years ago

I agree with the other commenters that overprovisioning (or underprovisioning) is a concern with the cloud, but the public cloud has long been more secure than on-prem data centers [1], [2], [3].

As for the cost, Reserved Instances can dramatically reduce your spend, with the caveat that you can get locked in 1 or 3 years. My company, Usage.AI, built a platform to solve this problem by automatically buying and selling Reserved Instances to get the price and flexibility benefits in one [4].

[1] https://www.infoworld.com/article/3010006/sorry-it-the-publi... [2] https://blogs.oracle.com/cloudsecurity/post/7-reasons-why-th... [3] https://cloud.google.com/blog/products/identity-security/ent... [4] http://usage.ai

GianFabien 4 years ago

You need to get some metrics for each of your containers. Rather than choosing a cloud machine that is similar to your current computer, consider micro-instances to host each of your services. That way, you can upscale only those instances that require additional capacity.
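If the services are already containerized under Docker, a rough first pass at those metrics is a one-shot stats snapshot (a sketch; the guard clauses are there so it degrades gracefully where Docker isn't running):

```shell
# Per-container CPU and memory snapshot, to size cloud instances
# against actual consumption rather than what the home machine has.
if command -v docker >/dev/null && docker info >/dev/null 2>&1; then
    docker stats --no-stream \
        --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
else
    echo "docker not available"
fi
```

Run it a few times under realistic load and the right instance sizes tend to become obvious.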

At $100+/month, I would like to think that you are generating sufficient revenue to cover those costs plus a return on your time and effort. $100/month over a year pays for a rather nice notebook.

dotty- 4 years ago

I'm sure I'm missing something, but I feel like the main reasons to choose cloud over self-hosting are availability and bandwidth. The data center is probably more reliable for uptime than your home. If neither of those is a problem for you, then I suppose you should consider segregating the network that machine runs on from the rest of the devices in your home, so your server can't connect to, for example, the TV in your living room.

  • PaulHoule 4 years ago

    Even back in the 1990s I had so many friends who were obsessed with the idea of running a web site from a server at home.

    For most people, their internet service is not that fast... If people like that got their wish, they would wind up praying that their web site never gets on Hacker News, or they will be after the first time it does.

    • jeroenhd 4 years ago

      In an age where gigabit upload speeds are no longer impossible to attain, I wouldn't worry too much about uplink performance. Sure, this is a regional thing, but if you set your website up right you can serve more than you'd expect with just 100 or even 50 Mbps up. An HTML page with resources doesn't need to be more than half a megabyte in size, and 50 Mbps up still nets you full performance for 12 page loads per second, something many websites will never even need.

      • wdfx 4 years ago

        It's not just about raw bandwidth though. Yes, my domestic line has 20+ Mbps upload, but the latency and routing distance from some other domestic endpoint on the other side of the world are going to be terrible. I could go to great lengths to make sure that what I serve is the minimum number of bytes, but you'll still be waiting multiple seconds for the connection to be established. Hosting the same data in a cloud or data centre would serve the audience much better overall.
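The back-of-envelope math a few comments up (half a megabyte per page over a 50 Mbps uplink, both that comment's own assumptions) can be made explicit:

```shell
# pages/sec a given uplink can sustain, assuming ~0.5 MB per page load
UPLINK_MBPS=50
PAGE_MB=0.5
# one page load = PAGE_MB * 8 megabits; divide the uplink by that
awk -v up="$UPLINK_MBPS" -v page="$PAGE_MB" \
    'BEGIN { printf "%.1f page loads/sec\n", up / (page * 8) }'
# prints "12.5 page loads/sec"
```

Swap in your own uplink and page weight to see where your connection tops out.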

t312227 4 years ago

IMHO,

the key point here is to automate the setup/deployment/maintenance of your system and application, so you have a certain degree of independence from the systems running the stage in question.

for development: use whatever you see fit - who cares if the system is down? you've got automation to set it up elsewhere if, for example, the hardware dies.

for production: this is "the other side" of the story, here you are aiming for availability etc.

use automation - which you already "showcased" / have as some kind of a PoC for your development-systems - to be able to quickly recover from major faults.

ad db: use a local test-database on your development-system.

you don't want to access prod for development-tests!!

idk. make a dump of prod every now & then - if prod-data is not sensitive -, or generate sufficient test-data, etc...

br, v
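The "generate sufficient test-data" option above can be as simple as a short script. A sketch with entirely made-up columns and row counts:

```shell
# Generate a small synthetic dataset for development instead of
# touching prod. Column names and values are illustrative.
OUT=/tmp/test_users.csv
echo "id,email,signed_up" > "$OUT"
for i in $(seq 1 100); do
    echo "$i,user$i@example.com,2021-01-$(printf '%02d' $(( (i % 28) + 1 )))" >> "$OUT"
done
wc -l < "$OUT"   # 101 lines: header + 100 rows
```

Synthetic data avoids the sensitivity question entirely, at the cost of not reproducing prod's weird edge cases.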

rubatuga 4 years ago

Good hardware (reliable SSDs and HDDs) and a backup solution are the first step. Next you will need a proper internet connection; for example, your ISP might block ports or put you behind a CGNAT. I created Hoppy Network to assist with this last step. It provides you a clean IPv4 and IPv6 address over WireGuard. Some networking background is recommended if you want to route multiple services.

https://hoppy.network
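For context on the general shape of such a setup (a generic, hypothetical wg-quick config, not any provider's actual one): the remote endpoint holds the clean public address and tunnels traffic down to the home machine.

```
# /etc/wireguard/wg0.conf on the home server -- all keys, addresses,
# and hostnames are placeholders.
[Interface]
PrivateKey = <home-server-private-key>
Address = 10.0.0.2/32

[Peer]
PublicKey = <provider-public-key>
Endpoint = vpn.example.net:51820
AllowedIPs = 0.0.0.0/0, ::/0
PersistentKeepalive = 25
```

The keepalive matters behind CGNAT: it keeps the NAT mapping open so inbound traffic from the endpoint keeps flowing.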

908B64B197 4 years ago

If it's just for personal use sure.

But the bottleneck will quickly become bandwidth for anything with users other than you.

dvnguyen 4 years ago

What’s the best way to get in touch with you? I’m building cost-effective dev cloud VMs that might fit your use case. They have granular usage-based metering, and you’ll be charged almost nothing when the VM is idle.

mPReDiToR 4 years ago

Yes.

NextCloud is your data on your hardware.

Encrypt it, back it up to the cloud (someone else's computer) as offsite storage, but don't give read/write access to your calendar, contacts and emails.

shreyshnaccount 4 years ago

You can almost always do some smart coding and use an rpi for most things
