Show HN: Scar – Static websites with HTTPS, a global CDN, and custom domains

github.com

363 points by cloudkj 7 years ago · 170 comments

fishtoaster 7 years ago

Looks very similar to my project https://github.com/kkuchta/scarr from a while back. It even uses the same acronym (I assume that's just a coincidence, since we both just picked a cool-sounding English-language word using the initials from S3, CloudFront, ACM, and Route 53).

At a glance:

- Mine handles domain registration + ACM verification automatically
- This one wisely uses CloudFormation instead of API calls
- This one does apex->www redirects, whereas mine uses the apex and has no redirect

Seems pretty cool!

  • cloudkjOP 7 years ago

    Wow, that is a fun coincidence! Indeed, I was going for a catchy four-letter acronym in the same vein as popular stacks like LAMP or MEAN. Perhaps the fact that we both landed on the same components and permutation of components means that there's something there :)

    I also started off in the same manner of implementation - bash scripts wrapping AWS CLI calls - then stumbled upon the more straightforward, template based approach.

    • sova 7 years ago

      Ha, so cool that you both ended up in the same end of the Sphinx! =)

jaden 7 years ago

GitHub pages [0] gives you static sites with HTTPS and a custom domain without nearly as much complexity as this if you're looking for an alternative to Netlify.

[0] https://pages.github.com/

  • gtirloni 7 years ago

    I feel like Netlify has set the standard in this area now so I'm curious to learn what's different when I see these projects mentioned here.

    • nathankunicki 7 years ago

      I had honestly never heard of Netlify. I thought GitHub Pages was the standard, with S3 static hosting a second (more involved) option.

      EDIT: Googling suggests Netlify offers a build, deploy, and hosting pipeline all in one box, which is substantially more than any of the projects mentioned here. These serve a single purpose: simple hosting of static websites.

      • dguo 7 years ago

        Netlify can also be very simple to use. You can literally give them a folder of just HTML, CSS, and JS: https://app.netlify.com/drop

      • xondono 7 years ago

        Also, GitHub Pages requires a public repository (or a pro account); Netlify + GitHub doesn't

        • sharcerer 7 years ago

          That's a bummer. I thought that now that we have free private repos, websites could be hosted from them. Still a nice feature, though. I guess the public nature definitely generates some trust and is also a good way of showing your work, since GitHub works like a project showcase platform too.

      • jeremyjh 7 years ago

        GitHub will build for you if you use Jekyll.

    • weego 7 years ago

      In the case of SCAR, you get to maintain your own AWS infrastructure, which isn't really the goal of most people wanting a quick static site.

    • davchana 7 years ago

      I have always and only used GitLab Pages, except for one test site at Netlify, and that too from a GitLab repo. GitLab Pages are super easy and static; you need one .yml file, plus a CNAME and a TXT DNS record.

      • r3bl 7 years ago

        GitLab introduced their Pages relatively recently, maybe three years ago. The other two (Netlify and GitHub pages) are a few years older. I also vaguely remember something about it first being available in GitLab's enterprise edition, and later on being ported to the community version.

        Not saying that there's anything wrong with it in particular, but it arrived a bit too late to set the standard for anything. People compared it to Netlify even when it was first announced.

    • alfredxing 7 years ago

      Netlify is a great tool, my biggest issue with it (and why I continue to use GitHub Pages) is that the "Global CDN" is a cluster of DigitalOcean nodes, which I don't trust as much as e.g. Fastly in terms of performance and reliability.

      (Note: that info is from anecdotally looking at Netlify site IPs, I could be wrong)

      • type0 7 years ago

        In terms of reliability I would trust Netlify any day over GitHub; they're built to serve sites and GitHub is not.

    • vidyesh 7 years ago

      Was about to say the same. Netlify is far less complicated and free to run a bunch of static sites with low to medium traffic.

  • jkarneges 7 years ago

    It even gives you a CDN. GitHub Pages is fronted by Fastly.

    • adam1210 7 years ago

      I tried using GitHub Pages with Fastly; however, it appears that when a new site deployment is done, GitHub does not invalidate the Fastly cache. On top of that, Fastly independently caches resources on the site, which can cause broken deployments for several minutes while the site serves a mix of cached resources from old and new deployments. I opened a ticket with GitHub support, and they said it was expected behavior. That makes it partially unusable for me.

      • type0 7 years ago

        I experienced this as well, wonder if one could mitigate this with some site parameters though?

    • ainiriand 7 years ago

      Legitimate doubt:

      Fronted or frontended?

      • masukomi 7 years ago

        Frontended doesn't make sense as a word. Fronted has many somewhat applicable definitions, including:

        * provide (something) with a front or facing of a particular type or material
        * act as a front or cover for someone or something acting illegally or wishing to conceal something: "he fronted for them in illegal property deals"
        * stand face to face with; confront

        • jobigoud 7 years ago

          As a non-native speaker I came to the conclusion that in English it is possible to turn any noun into a verb. I'm not the parent, but "frontended" made sense to me.

          • bradknowles 7 years ago

            I'm a native English speaker. Have been for over 50 years. Frontended sounded perfectly natural to me.

        • ainiriand 7 years ago

          Thank you for the clarification.

  • fjp 7 years ago

    I have two sites up using GitHub Pages + Cloudflare and it's super easy and enjoyable

  • Scarbutt 7 years ago

NateEag 7 years ago

Am I the only one who still hosts my own static sites on a plain old virtual machine?

It's pretty simple to configure nginx for static sites, and by doing it yourself you reduce vendor lockin to just about nil.

Even if S3 is massively cheaper, $5/month for a tiny VM seems like a small price to pay for being vendor-abstract.

I suppose S3 is way less likely to suffer a meaningful outage than my little VM, but how many 9s do my personal websites actually need?

  • clintonb 7 years ago

    Maintenance is my primary concern. I deal with software for a living. I want my blog to just work without me having to worry about maintaining the VM. Netlify makes this dead simple.

    I used to host Wordpress sites for myself and family members. I've now moved nearly all of those sites to Netlify (for hosting) and Forestry (for editing/CMS). I no longer have to worry about malicious hacking attempts, Wordpress updates, or anything else outside of the site content.

    Here is my post on this transition for those interested: https://dev.clintonblackburn.com/2019/03/31/wordpress-to-jek....

    • ehonda 7 years ago

      apt-get install nginx goaccess

      cd website

      cp -r * /var/www/html

      Yearly maintenance required: apt-get update && apt-get upgrade

      View traffic stats: goaccess -f /var/log/nginx/access.log

      I'd say it's just as easy and seamless to do it yourself on a cheap VPS for a static website. HTTPS isn't that much extra work either.
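
      As a sketch, the nginx side of that can be a couple of server blocks (the domain and cert paths are placeholders, assuming Let's Encrypt certificates issued by certbot):

      ```nginx
      server {
          # Send plain HTTP to HTTPS
          listen 80;
          server_name example.com;
          return 301 https://$host$request_uri;
      }

      server {
          listen 443 ssl;
          server_name example.com;

          # Certificate paths as laid out by certbot (placeholder domain)
          ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
          ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

          root /var/www/html;
          index index.html;
      }
      ```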

      • mises 7 years ago

        Maybe I'm just a security nut, but I would probably also relegate ssh to a non-default port, allow key-only authentication, narrow ciphers, close all other ports (except 80, 443, and 53). Also fail2ban, sysctl tweaks (networking, disable coredumps), and a whole bunch of other things I have in a script.

        I've seen way too many people get their boxes trashed to leave an internet-accessible one exposed and unsecured.
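
        For reference, the SSH part of that hardening is only a few lines of sshd_config (a partial sketch; the port number is an arbitrary choice):

        ```
        # /etc/ssh/sshd_config (fragment)
        # Move SSH off the default port
        Port 2222
        # Key-only authentication
        PasswordAuthentication no
        PermitRootLogin no
        # Narrow the cipher list
        Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
        ```

        fail2ban and the sysctl tweaks are separate config on top of this.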

        • KingFelix 7 years ago

          What are your thoughts on sharing your script? I have a few VPS and would love some new tools / proper setup. I have been learning as I go, learned a few day 1 things not to do, but would like to learn more about networking/coredumps. Cheers!

          • mises 7 years ago

            I'd have to clean it up first. I wrote it for a competition, and it does its job well; I may clean it up and improve it soon. Right now, it's a mess of a monolithic script.

            • KingFelix 7 years ago

              Excellent; well, if you get around to it I would love to scope it out. I'm an autodidact after getting fed up with shared hosts like GoDaddy/HostGator/InMotion; they were easy to use back when I had no idea what I was doing. I moved to DigitalOcean and it's been a fun learning experience. I love using the command line and solving problems. Would love to be as tight on security as you are! Cheers

      • clintonb 7 years ago

        That's great that you have enough time and experience to consider all of this easy. As someone who works a bit higher up the stack, I rarely go as deep as configuring Nginx. This setup may take you a few minutes, but I usually end up spending an entire Saturday on stuff like this. Having done this for a few years, I would rather spend my free time on other things.

      • viraptor 7 years ago

        > Yearly maintenance

        I'd say continuous maintenance, with responses to specific issues. Also, Debian updates don't restart services that rely on updated shared libraries, which means you need to restart your nginx after OpenSSL updates. Also restarts when the kernel is updated. Also...

        There's really more to it than just an annual upgrade. You're likely not going to be affected if you ignore this, but why risk it?

        • ehonda 7 years ago

          Ok, I forgot to add 'reboot' to yearly maintenance :). And change the SSH port or require a private key. But if it's just a personal static website, I wouldn't get overly concerned about being hacked. Assuming you have backed up your page, it's another handful of simple commands to rebuild the whole thing anyway. These boxes are also quite fun for other uses, like setting up a Squid proxy, messing with an email server or IRC server, or just having a personal mini-cloud you can easily access from anywhere.

          • viraptor 7 years ago

            It's not about rebuilding if your website is defaced. It's the possibility of someone (for example) adding a client side exploit / throttled miner to your existing website. Without more monitoring, you won't know it happened, and neither will most of your visitors.

            • ehonda 7 years ago

              Has this sort of thing ever happened to you?

              • viraptor 7 years ago

                Yes. I can't remember the details of the entry point since it was decades ago, but the end result was JavaScript snippets targeting browsers appended to the end of the index page.

                Adding extra servers like your own cloud storage, email, IRC, etc. just expands your risk to more services (unless you internally separate them into namespaces/VMs, but then we're really far away from "simple static hosting" territory).

                • ehonda 7 years ago

                  Lucky for me, I don't use JavaScript. But that was decades ago, right? Well... relax! I think you are letting these fears get in the way of actually enjoying something quite fun. Perhaps the NSA has some lovely nginx exploits, but the script kiddies that trawl the web these days are laughable. (Knock on wood.)

                  • viraptor 7 years ago

                    It was decades ago because that was before I started working in IT security, stopped using a single VM for mixed purposes, and started treating patching seriously. It's literally part of my job not to relax about those things, to keep bringing them up, and to remind people that they're not easy, annual apt-get updates.

                    You're right that there are fewer wormable issues these days. But the question is: does your usual approach to security allow you to stay safe when (not if) the next one happens? And feel free to continue in a not-super-secure way for personal, fun things. Just keep in mind that there's more to the story, and the more moving parts, the more you need to work to keep things reasonably secure.

    • seriocomic 7 years ago

      Your story is almost identical to mine - years of hosting on a VPS a bunch of small family/project mostly-Wordpress sites. I simply exported them and uploaded to Netlify+Github. I haven't really bothered keeping the connection from the back-end to a dynamic export but have kept those pieces in place for another wet weekend.

    • NateEag 7 years ago

      You make a good point clearly. Thanks for taking the time to do it.

      I guess I feel like the maintenance cost is worth the knowledge I gain from automating my own infrastructure, but I realize not everyone is interested in devops. I'll also note it costs me very little time - I don't remember the last time I had to do anything actively with it.

      Elsewhere in the thread I mentioned vendor lockin, which does concern me. I also worry about vendor monoculture - if everyone just uses AWS, they gain undue influence over the market, so in some ways I guess my stubborn self-hosting is a small gesture against that.

      I see a lot of people complain about how the internet has become a drab, uniform machine that treats people as eyeballs or wallets to be sacrificed to Moloch [1], little like the wild, free-spirited collection of small sites it was back in the late 90s.

      I think a lot of that is the price paid for centralization and funding, so again, self-hosting is a small way to fight back just a bit against that.

      1: Moloch in this sense: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/

    • jasonlingx 7 years ago

      Did you consider Netlify CMS vs Forestry?

      • clintonb 7 years ago

        I did not. I've used Forestry for over a year. I was not aware of Netlify CMS until shortly after writing my post.

        • jasonlingx 7 years ago

          Ah. I was not aware of Forestry until I came across your post as well. Now I’m not sure which one I should go with.

          • vidyesh 7 years ago

            I use Netlify and can vouch for its simplicity. I have a few sites on it, some are deployed via bitbucket and some are simply drag-and-drop.

            I never used Forestry but by the looks of it, it looks more of an actual CMS and far too sophisticated than Netlify. Being said that it looks over engineered to me for hosting static websites. But if I wanted a CMS to host my client websites whom I have to hand over control, I would definitely give Forestry a try.

  • shshhdhs 7 years ago

    I disagree with encouraging people to do this. You are not accounting for a CDN here, like the post does. A website on the HN front page went down yesterday on a $5 VM.

    And S3 just holds your HTML files, for super cheap. There's no lock-in concern there. You can easily migrate to nginx in the future if you really want, but start with S3.

    • danpalmer 7 years ago

      HN won't take a static website on a $5 VM down if it's set up even remotely correctly. Traffic to a popular link on HN is likely to get on the order of ~100rps max (more likely 1-10rps). Nginx will handle that with no problem.

      CDNs may make a site a bit faster, but for a static site it's unlikely to make much difference if you're on a good host in US/EU or central Asia. If you're hosting in Australia or Japan, maybe it might be a little slower than expected, but still totally usable.

      • tyre 7 years ago

        Completely agree. I think many people here regularly work on larger web applications in dynamic languages with heavy JS front-ends piling on dependencies.

        Nginx is unbelievably fast by itself, not to mention the optimizations that are completely unnecessary for a static blog. It's not going to be your blocker.

        If you're serving up 20MB of JS and inlined images on each page load, yeah, you may want to rethink that. But we don't need to get wild. My homepage is 9.2KB. Longer blog posts (e.g. [1]) can clock in at 20KB. HN won't take that down.

        [1] https://maddo.xxx/thoughts/what-the-hell-are-you-doing.html

        • mises 7 years ago

          Out of curiosity, what made you decide to purchase a ".xxx" domain for your blog? And do you regularly get comments?

      • type0 7 years ago

        Not to mention that most VPS providers worth speaking about nowadays use SSDs.

        For a personal site, who the heck even needs a CDN? The only reason I might use one is if I put up a photography website with huge shots, or if there's a bunch of videos as well.

      • tty2300 7 years ago

        When I tested, my $5 nginx VPS could handle 16,000 requests per second over localhost. Maybe at worst 10,000 per second over the network.

        • danpalmer 7 years ago

          Yep this doesn't surprise me at all. A stock install of nginx with no tuning at all was reaching 26k rps on my 2013 MacBook Pro when I tested it years ago.

    • herbst 7 years ago

      I have front-paged on HN and Reddit several times, often 'only' using $5 VMs. However, I was using Cloudflare, or at least nginx with proper caching settings.

      I run several hundred dollars monthly of infrastructure, but my websites are nearly all on a simple VM for about 20€/month on Vultr right now.

      Web hosting is only expensive when people run badly optimized infrastructure.

    • tomschlick 7 years ago

      Putting Cloudflare in front of a VPS is pretty simple and gives you the same result so long as you are sending the correct cache headers.
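
      With nginx behind Cloudflare, those cache headers can be a couple of add_header lines (a sketch; the paths and TTLs are arbitrary choices):

      ```nginx
      server {
          root /var/www/html;

          # Let Cloudflare (and browsers) cache pages for an hour
          location / {
              add_header Cache-Control "public, max-age=3600";
          }

          # Long-lived static assets can be cached much longer
          location ~* \.(css|js|png|jpg|svg|woff2)$ {
              add_header Cache-Control "public, max-age=31536000, immutable";
          }
      }
      ```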

  • WhiteOwlLion 7 years ago

    There are some low-end VPS providers that go as low as $1/mo. I usually stick to $2/mo or higher just for stability and reliability. You've even got hosts like Hetzner Cloud and Scaleway (see European hosts) that provide great service, bandwidth, and VPSes. I don't know why people use Amazon... I don't find their value proposition very good unless you need dynamic scaling for unpredictable demand.

    • mises 7 years ago

      That last 'unless' is exactly what the value prop is. A personal site is indeed just fine on a cheap VPS (and I can also put up the occasional file), but AWS has better reliability and much better scaling. When you consider the opportunity cost of time, AWS can come out cheaper.

  • Symbiote 7 years ago

    I host my site on Apache running on an ARM board in my garage.

    I'll consider moving to a VM if/when the ARM board eventually fails, but it's been running for 6 years so far. I have 6TB of storage, which mostly serves as a NAS but includes about 200GB of photos for the website.

    There is no deployment process; the web root is mounted by NFS on my desktop. I can share large files with people just with "mv" or "ln -s".

    > how many 9s do my personal websites actually need?

    My router seems to crash every 3-4 months, and I need to reset it. There's around 15-30 minutes of power failure every year. I don't worry about this.

    • MarvelousWololo 7 years ago

      Sorry about the ignorance but how do you run it from your garage? What about bandwidth? Could you share the url? Also would you recommend me any guide to get started?

      • tty2300 7 years ago

        The usual roadblock in this process is getting ports exposed to the internet. In the best case this can just be done on your router configuration. In the unfortunately common case the ISP blocks you from doing this and the only solution is to change ISP.

        • Symbiote 7 years ago

          I've heard of ISPs blocking ports, but not in Europe. I just forward ports 80 and 443 to the server (and pinhole the IPv6 ports) and it's done.

          The upstream bandwidth is about 60Mb/s, which is fine for almost everything.

          • MarvelousWololo 7 years ago

            How can I do such thing? I'm in Europe as well. Do you have any guide to get me started?

            • dewey 7 years ago

              The only thing you have to google for is “Port Forwarding” and it’s usually a few clicks in your router interface. Then you just run the service you want on your computer / NAS / Raspberry Pi and tell your router to forward the port to your service IP / Port. If you have a dynamic IP at home you probably also do have to get a script or something to update your domain records if you want to point a domain to your home service.

  • tyre 7 years ago

    yes. I host my personal website (maddo.xxx) on a single EC2 instance with just nginx. It's easy. It's fast. When I want to over-engineer the shit out of it for fun, it's ready.

    Deploys? One line of `scp`

    scp -r -i ./certs/maddoxxxnginx.pem ./app/* ubuntu@13.52.101.21:/var/www/maddo.xxx/

    (that deploy script just bulk uploads everything, but that's fine for now. The whole site is measured in KB.)

  • jniedrauer 7 years ago

    I served a static website off nginx from a docker container for a while. At some point there was a breaking change and it would have taken 3 minutes to fix, but I didn't bother. Static hosting is a solved problem and there's not really a reason to do it yourself unless you just want to learn.

  • jakelazaroff 7 years ago

    How is there vendor lock-in for static websites, though? Can't you just take your files and go somewhere else?

    • NateEag 7 years ago

      Yep, you certainly can, which is part of the beauty.

      Last I looked, though, you couldn't deploy to S3 without using tools that work specifically with it.

      I guess it's really not that big a deal, but I prefer the genericness of "I'm configuring a webserver and pushing my files to it."

      That process can be just about fully automated, even including HTTPS setup if you want that, and then you can use it with whatever server provider you like.

      • jakelazaroff 7 years ago

        Depends on the tools! If you're manually copying files, there are clients (e.g. Transmit, which is what I use) that just treat it like any (S)FTP server. If you're using the command line, yeah, you need to use Amazon's CLI, although it's still basically a one-liner to sync the directory you want to publish.

        • NateEag 7 years ago

          Ah, yes, that does make sense. Thanks for explaining.

          I'm a fairly aggressive automator, so I forget that doing it by hand is actually an option.

  • dmitriid 7 years ago

    I’m old enough to serve my static site from nginx running on my server :) No virtual machines.

  • pushpop 7 years ago

    If it’s a static site and you own the domain then vendor lock-in isn’t really a problem regardless of if you use cloud services or not. Because you can just dump those files on a different provider and change your DNS entry. It’s not even remotely the same level of complexity as other services when people normally talk about vendor lock-in.

  • wolco 7 years ago

    I do. I'm surprised more on HN don't; perhaps it speaks to something bigger.

  • klodolph 7 years ago

    I switched from S3 to a $5/month VM a while back and it is massively better.

  • mevile 7 years ago

    > you reduce vendor lockin

    I don't know why anyone cares about vendor lock-in. Either it's trivial to move an AWS Lambda to a Google Cloud Function because you don't have a lot going on, or it's not trivial to move stuff from even your own servers to other servers because everything is under huge load and you have a considerable amount of data to migrate under complex conditions.

    Moving around is either hard or easy based on things that don't really have anything to do with vendor lock in.

    • codegladiator 7 years ago

      No, vendor lock-in can mean a lot. Take even a plain API implementation: one vendor might implement something (storage, for example) in a way that isn't possible with another vendor.

      I recently moved one of my k8s clusters from GC to AWS; even a terminology change can introduce a lot of awkwardness.

gvand 7 years ago

Nice project.

As an aside, I genuinely wonder under which circumstances a CDN will be useful for a static website nowadays. I have a static website that has been on the HN homepage a few times and got picked up by the Chrome mobile recommendations, and an nginx/HTTPS setup with a slightly tweaked configuration never had a problem handling the traffic, even on the smallest DO droplet.

Edit: Thanks for these replies.

  • Xylakant 7 years ago

    The CDN makes the site load faster by caching the content on edge nodes close to the client. It’s not for taking load off the origin, but purely for network latency.

    • toredash 7 years ago

      And by caching it you will reduce traffic to origin.

      • Xylakant 7 years ago

        Certainly, but for static pages the traffic to the origin is not a primary concern except for really large amounts of traffic.

  • zachruss92 7 years ago

    What I like about static sites is that you can serve the site in its entirety from a CDN. So you can literally just CNAME www.yoursite.com to yoursite.gitlab.io (or w/e static site host you use). This dramatically cuts down on latency worldwide. It also removes your web server as a single point of failure for short-term outages.
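
    In zone-file terms, that's a single record (the names are placeholders). Note that the apex domain generally can't be a CNAME, which is one reason the www subdomain is used here:

    ```
    ; BIND-style zone file fragment (hypothetical names)
    www.yoursite.com.   300   IN   CNAME   yoursite.gitlab.io.
    ```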

    • dvfjsdhgfv 7 years ago

      > you can literally just CNAME www.yoursite.com to yoursite.gitlab.io

      After so many years I still can't really understand how easily people hand over almost complete control of their site to someone else, just because everyone else does. It's like handing over your e-mail account password when LinkedIn started asking for it. Yes, CloudFlare, Google and others are helping you, but there is a price to pay that might not be immediately visible.

      • acdha 7 years ago

        It seems pretty different from a password because you're not giving control of your domain: if they broke their contract, you could take it back at any time.

        That's the other odd part about this complaint: you're trusting a company like GitLab not to break their terms of service, which is a potential factor to consider but also one where they'd have severe negative outcomes to their business if they went rogue. Since you're already trusting a number of other parties, why is this one so much scarier?

        • nybble41 7 years ago

          > It seems pretty different from a password because you're not giving control of your domain: if they broke their contract, you could take it back at any time.

          You are giving them everything they'd need to obtain a DV certificate for your domain, though. You can stop them from using it at any time just by changing the DNS records, but you'd need to wait at least two years (825 days for maximum TLS certificate duration) before you could be certain any certificates they had been issued before that point had expired.

      • cortesoft 7 years ago

        How would you do it without trusting a third party?

    • donmcronald 7 years ago

      You can get some nasty cold starts though. Here's a domain I have that doesn't get any traffic (literally zero):

      https://imgur.com/a/dVPQYKC

      The first hit is brutal. I won't name the CDN since I'm not an expert, but it doesn't take long to go cold (minutes), and once it's cold even the cached hits are 400ms.

      Is 400ms really a dramatic reduction in latency?

  • tedshroyer 7 years ago

    The technical reason to use a CDN with AWS S3 is so that you can have a custom domain name with HTTPS. S3 will do HTTP with custom domains, but to get HTTPS you have to proxy it. In this case, you can think of CloudFront as the proxy.
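
    In CloudFormation terms (the template-based approach SCAR uses), that arrangement looks roughly like the fragment below; the names, ARN, and domains are placeholders, and several required properties are omitted:

    ```yaml
    SiteDistribution:
      Type: AWS::CloudFront::Distribution
      Properties:
        DistributionConfig:
          Enabled: true
          Aliases: [www.example.com]
          ViewerCertificate:
            # The ACM cert must live in us-east-1 for CloudFront
            AcmCertificateArn: arn:aws:acm:us-east-1:123456789012:certificate/abc
            SslSupportMethod: sni-only
          Origins:
            - Id: s3-website
              # The S3 *website* endpoint only speaks plain HTTP
              DomainName: www.example.com.s3-website-us-east-1.amazonaws.com
              CustomOriginConfig:
                OriginProtocolPolicy: http-only
          DefaultCacheBehavior:
            TargetOriginId: s3-website
            # HTTPS terminates at the CDN, as described above
            ViewerProtocolPolicy: redirect-to-https
            ForwardedValues:
              QueryString: false
    ```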

rsweeney21 7 years ago

We use a combination of Netlify + Webflow + Hugo for our website (www.facetdev.com). With that we get a global CDN and our website will never go down.

Netlify has been awesome and it made it stupid easy to combine our www site on Webflow with a hugo static blog in a subfolder (/blog). This might be my favorite web publishing workflow ever.

If you haven't tried Netlify yet, definitely give it a look.

  • triangleman 7 years ago

    Does the webflow save to a git repo?

    • rsweeney21 7 years ago

      Yes. It also provides a really nice UI for building our www site which we like to rev frequently. Webflow is the bomb if you are familiar with HTML and CSS. Super clean HTML, total control over all the css attributes, drag and drop builder.

      • donmcronald 7 years ago

        Is it a completely hosted service? It looks cool, but I'd be reluctant to use it if it's a subscription to an online tool where I have to pay forever. Is there a standalone version of that editor?

        • rsweeney21 7 years ago

          It is hosted, but you can use it free forever if you have Netlify in front of it and use your free sitename.webflow.io URL as your origin server. You can also export your site as static html if you want.

  • gcb0 7 years ago

    Looking at Webflow, I have flashbacks (not sure if the good kind) of Macromedia Fireworks in the 2000s.

singingwolfboy 7 years ago

I wrote a tutorial for how to do all this setup manually, if you prefer: https://www.davidbaumgold.com/tutorials/host-static-site-aws...

Sometimes it’s nice to understand how all the pieces fit together, instead of using an automated system!

djsumdog 7 years ago

How much does this cost? I put in some more effort to set up my HAProxy and nginx containers on a Vultr node, but I get Let's Encrypt for free, so I'm just paying for a Vultr node (or DO droplet) and the price of the domain name:

https://github.com/sumdog/bee2

  • 1023bytes 7 years ago

    My estimate is about $1.50 a month, so definitely less than a full VPS, but it depends on the traffic and how much data you store.

  • cloudkjOP 7 years ago

    It costs at least $0.50/month but probably not much more than that for most small to medium sites.

    The $0.50 is the monthly cost of the Route 53 hosted zone; the CloudFront and S3 costs typically amount to pennies, but of course it depends on traffic.

whalesalad 7 years ago

I have a two-line Makefile with one target that syncs my website with an S3 bucket. Deploys are instant. The rest is handled by Cloudflare and AWS. The sheer number of moving parts in this system is outrageous for a static website. A fun project for sure, though.

  • cloudkjOP 7 years ago

    I think the complexity for this setup is about the same. Once the different AWS services are provisioned during the initial setup, subsequent deploys are quite straightforward. For example, I have a three-line Makefile target for Jekyll sites that looks something like this (using Docker with a local `aws-cli` image wrapping the CLI):

        docker run --rm -e "JEKYLL_ENV=production" -v $(PWD)/src:/srv/jekyll -it jekyll/jekyll:3.8.5 jekyll build
        docker run --rm -itv $(HOME)/.aws:/root/.aws aws-cli aws s3 sync src/_site s3://www.<mydomain>
        docker run --rm -itv $(HOME)/.aws:/root/.aws aws-cli aws cloudfront create-invalidation --distribution-id <mydistribution> --paths "/*"

kaiku 7 years ago

Bundling service config and launch makes the whole process easier, for sure. There's also more than one way to configure this depending on what your needs are, so it'd be cool to have a few different versions of SCAR.

I started with a setup similar to your diagram and tweaked it when I realized S3 didn't serve index.html when the URL was just the parent "directory", i.e. example.com/foo/ doesn't resolve to s3://example.com/foo/index.html. To get this working I had to write a bit of JS in a Lambda function and deploy it at the edge of my CloudFront distribution to do some URL rewriting.

Given that's the behavior most people expect, might be worth considering?

  • IanCal 7 years ago

    I think that behaviour should be handled with index documents in S3, without the need for lambda: https://docs.aws.amazon.com/AmazonS3/latest/dev/IndexDocumen...
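A minimal sketch of the S3-native approach IanCal links to: the bucket's website configuration names an index document, so /foo/ serves /foo/index.html with no Lambda involved. This assumes boto3 and valid credentials, and the bucket name is whatever yours is:

```python
# The website configuration S3 expects; ErrorDocument is optional.
WEBSITE_CONFIG = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "404.html"},
}

def enable_index_documents(bucket: str) -> None:
    import boto3  # imported lazily so the config above is usable without it
    s3 = boto3.client("s3")
    s3.put_bucket_website(Bucket=bucket, WebsiteConfiguration=WEBSITE_CONFIG)
```

One caveat: index documents are only honored when CloudFront points at the bucket's *website* endpoint, not the plain REST endpoint (where only the distribution's default root object works, and only at the root), which may explain the behavior kaiku hit.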

  • cloudkjOP 7 years ago

    That should indeed be the default behavior out of the box with the way the S3 buckets are configured. I have a couple Jekyll sites deployed this way, and a request to the parent directory does get served by the contents in `index.html`. Are you not seeing that behavior?

    I'd definitely like to add more variants of the default stack. At the minimum, I'm sure there are folks that prefer `www` redirects to the apex domain, or removing the `www` subdomain altogether.

huphtur 7 years ago

Recently moved some static sites from S3 to AWS Amplify Console. Super easy setup and even easier maintenance with the Git-based workflow: https://aws.amazon.com/amplify/console/

SadWebDeveloper 7 years ago

Does anyone have an average monthly fee for using these as a hosting solution? Last time I ran the numbers, using all of these services came to 5 to 10 USD per month, and it was better to use Amazon Lightsail (3.50 per month) or other cheaper alternatives at LowEndBox.

iBelieve 7 years ago

For anyone looking for a hosted solution, https://surge.sh/ is super nice and simple without any of the complexity of managing the stack yourself. Deploying uses one simple command, and you get hosting and custom domains for free, though I believe SSL is paid for custom domains. (I'm not affiliated with Surge at all, just a happy user.)

  • cloudkjOP 7 years ago

    I was actually wondering that myself: Is there interest in a hosted service? It'd be quite similar to (as many comments have suggested) Netlify and the one you linked to.

    I was mostly going for a DIY solution since I wanted to "own" the bits being deployed while remaining as close to the infrastructure as possible. Providing a hosted service somewhat moves away from the DIY spirit; I suppose additional tools/UIs could be offered to simplify setup and deployment and still run everything directly on AWS, but at that point one might be inclined to just move to one of the other hosted solutions for the simplicity.

jareware 7 years ago

Same feature set, plus a few extras (Basic Auth support, custom headers, preventing direct access to the underlying S3 bucket), implemented as a reusable Terraform module: https://github.com/futurice/terraform-utils/tree/master/aws_...

tamalsaha001 7 years ago

How is this any different from Firebase hosting? We have been using it for a while with no problem. Also comes with a very generous free tier.

timClicks 7 years ago

Sorry to nitpick, but "Copyright © 2019" isn't a "license". It's not even a full copyright declaration without listing an owner.

  • pbhjpbhj 7 years ago

    >"a full copyright declaration"

    A what? In the majority of the world, copyright has been automatic for about 140 years.

    How you get copyright is you make a work. No need to put anything else on it. IIRC there are only about 3 countries that aren't signatories to the Berne Convention.

    In the USA you can file a registration in order to get better treatment in court, but it hasn't been required for 40 years or so. Is that what you're referring to?

    FWIW the license is very clearly MIT, https://github.com/cloudkj/scar/blob/master/LICENSE.

  • tide_ad 7 years ago

    Sorry to nitpick, but copyright declarations are a thing of the past in many nations: copyright protection is automatically conveyed upon creation, registration is only necessary within a short time after infringement is detected, and registration serves only to maximize the monetary sanction the government will levy on your behalf.

    And regarding the license, they have the MIT license added to the repository.

    • timClicks 7 years ago

      That's fine, but this is purporting to be a copyright declaration. I know they're unnecessary, but if you are going to add one, you should do it properly.

morenoh149 7 years ago

Great job! I wish more projects had 1-click deploys to Heroku, AWS, GCP, or Azure. This is a good habit more people should get into.

Running this project on AWS can expose a cloud beginner to many concepts in an interesting way. Now I just have to figure out what static website I want to run in this!

Please do the same for running your own scalable wordpress install!

  • donmcronald 7 years ago

    The technology is awesome, but I won't use CloudFormation, Azure Resource Manager templates, etc. until AWS, Azure, etc. support spending limits. Getting into the habit of clicking "Deploy Stack" when your credit card is attached to an account that allows unlimited spending seems risky to me.

t0astbread 7 years ago

What benefit does this have over Netlify?

anvarik 7 years ago

you lost me at GoDaddy

blairanderson 7 years ago

TL;DR this is an AWS stack with 10 AWS services required to build/deploy a static site with HTTPS/CDN

I will be staying with Netlify.

  • d-sc 7 years ago

    I just built my first static page since middle school this last weekend using netlify and a static site generator [Publii]. I was amazed at how simple and fast netlify is.

    I’m confident I could figure out how to do something much more complicated. But I want to focus on other things, and it’s nice to not have to think about it.

  • vnglst 7 years ago

    And, if I’m not mistaken, it’s also missing the 1-command-deploy-a-completely-new-version-but-also-keep-the-current-one feature.

    It’s still a cool project though, since it shows exactly how many problems Netlify solves for us
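The versioned-deploy idea vnglst is missing from this stack can be sketched: every deploy lands under its own immutable S3 prefix, and CloudFront's OriginPath is flipped to the new prefix (flipping it back is the rollback). The prefix scheme below is an invented convention, not anything Netlify or SCAR actually does:

```python
from datetime import datetime, timezone
from typing import Optional

def release_prefix(now: Optional[datetime] = None) -> str:
    """One immutable prefix per deploy, e.g. /releases/20190101T120000Z."""
    now = now or datetime.now(timezone.utc)
    return "/releases/" + now.strftime("%Y%m%dT%H%M%SZ")

# Deploy step (illustrative shell):
#   aws s3 sync _site "s3://<bucket>$PREFIX"
# then update the distribution's OriginPath to the new prefix and
# invalidate, leaving every previous release intact for rollback.
```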

pier25 7 years ago

Off topic, but what did you use to draw the flow diagram?

  • cloudkjOP 7 years ago

    The AWS CloudFormation console has a "Designer" tool that allows drag-and-drop creation of template files, and also visualizes existing JSON or YAML template files with these diagrams.

bsingh4 7 years ago

You've only taken care of the surface-level complexity with AWS. Want to do something more, like add a header to the response? Well then, create a Lambda, deploy it to the edge, and pay per page view. Firebase is much more elegant at this: the initial deploy, and then the evolution and addition of features geared to static site deployment.
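To make bsingh4's point concrete, adding even one response header on this stack means attaching a Lambda@Edge viewer-response function along these lines (a Python sketch; the specific headers are just examples, not part of SCAR):

```python
EXTRA_HEADERS = {
    "strict-transport-security": "max-age=63072000; includeSubDomains; preload",
    "x-content-type-options": "nosniff",
}

def handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    headers = response["headers"]
    for name, value in EXTRA_HEADERS.items():
        # CloudFront represents each header as a list of {key, value} dicts.
        headers[name] = [{"key": name.title(), "value": value}]
    return response
```

And since Lambda@Edge is billed per invocation, that function really does cost you something on every page view, which is the trade-off being criticized.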

myresume 7 years ago

Try out https://freepage.io, which is much easier to use than GitHub Pages. You don't even have to create an account, verify an email, and all that nonsense to use it. And it has social media built in to get your page out into the world.

js2 7 years ago

The lambda stuff is there just to upload the welcome.html?

Also, maybe consider configuring a logs bucket for the cloudfront logs?

paulgb 7 years ago

This is cool, I'm glad somebody built this! I love netlify but I worry about vendor lock-in.

  • reificator 7 years ago

    I'm usually paranoid about vendor lock in, but I can't join you on this one.

    Netlify assumes a version control repository that you can pull from, run a build step, and then host static files from. The build tools are open source, the output is static and trivial to download and rehost, and the repository is git meaning one clone is all you need to port to any other service.

    Where exactly is the vendor lock in?

    • StavrosK 7 years ago

      Nowhere (unless you use Netlify-specific features like Lambdas, forms etc). You can just copy your static site elsewhere and you're ready.

    • paulgb 7 years ago

      It's not so much my code that is locked in, as that netlify has spoiled me by making deployment so streamlined that it would be hard to go back to manual deployment. This gives me another option, which I appreciate. That's all I meant.

      • reificator 7 years ago

        Netlify makes things easier, but S3 + Cloudfront + Route53 (Or insert favorite cloud vendor here) for a static site is not that far behind.

  • quaffapint 7 years ago

    Why do you worry about vendor lock-in with netlify? They host static sites, so you are free to go anywhere. Unless you happen to be using their other services which this doesn't address anyway.

    • chrisfinazzo 7 years ago

      This is my concern in a nutshell, unless someone has a tool that spits out a properly formatted .htaccess so I can migrate my HTTP headers and redirect rules.

      Netlify's playground is easy to use for setting this up, but I'd also like to have this available in a standard format - just as an escape hatch in case I need it.

      https://www.htaccessredirect.net is there, but I'm thinking even less configuration if that's possible.
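A toy sketch of the escape hatch chrisfinazzo wants: translating the simplest Netlify-style `_redirects` lines ("/old /new 301") into Apache Redirect directives for an .htaccess file. Splats, placeholders, forced (`!`) statuses, and custom headers are all out of scope here:

```python
def redirects_to_htaccess(text: str) -> str:
    out = []
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        parts = line.split()
        src, dst = parts[0], parts[1]
        status = parts[2] if len(parts) > 2 else "301"  # Netlify's default
        out.append(f"Redirect {status} {src} {dst}")
    return "\n".join(out)
```

For example, a `_redirects` file containing `/old /new 302` would come out as `Redirect 302 /old /new`.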

  • gk1 7 years ago

    Netlify doesn’t hold your files; they stay in GitHub or whatever repo you decide to use.

  • holtalanm 7 years ago

    this doesn't solve vendor lock-in. You're just trading one vendor for another.

srathi 7 years ago

How much should this cost per month in AWS billing for a small static website?

  • jboynyc 7 years ago

    Of course that depends on what kinds of assets you serve and how many hits you get, but to provide a ballpark, I had a small personal site with few media assets on S3, and it consistently cost me US$0.12 per month. I think once it cost me 14 cents and I thought, "Wow, I must've been popular last month!"

    I didn't run analytics so I can't say how many hits it got, but traffic was probably fairly average for a personal site.

  • enigmango 7 years ago

    Less than $1/month if you've got low traffic. I haven't used this, but I have this type of setup for a static site that uses Hugo.

    - S3 costs: $0.10/month

    - Route53: $0.50/month ($0.50 per hosted zone)

    S3 costs could be lower - I have other buckets with stuff that count towards my cost.
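A back-of-the-envelope version of enigmango's numbers. Every input here is an assumption; real AWS pricing varies by region and traffic and changes over time:

```python
ROUTE53_PER_ZONE = 0.50  # USD/month per hosted zone, per the comment above

def monthly_cost(hosted_zones=1, s3_usd=0.10, cloudfront_usd=0.05):
    """Sum the fixed Route 53 fee with assumed S3 and CloudFront charges."""
    return ROUTE53_PER_ZONE * hosted_zones + s3_usd + cloudfront_usd
```

With the defaults above this comes out to roughly $0.65, comfortably under $1/month for a low-traffic site; the hosted zone is the only truly fixed cost.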

shapiro92 7 years ago

This is highly complex for no reason. GitHub Pages and Netlify provide easy-to-use custom static page hosting.

Your abstraction is nice, but the learning curve for such a setup is incredibly high.

vijaybritto 7 years ago

This seems like a nightmare to set up and maintain for a newcomer. Netlify lets us set things up in a jiffy. This is a nice project, but not for anyone below intermediate.

dlhavema 7 years ago

Yeah, super cool! Thanks, this was the only part I was unclear about in connecting the domain to the bucket easily..

adontz 7 years ago

I did the same on Azure a few days ago and it was much, much easier.

faheel 7 years ago

Just use Netlify.

ryanisnan 7 years ago

Other than the notion that all traffic should be served over HTTPS, if you have a purely static site, why the big fuss?

  • thethirdone 7 years ago

    Without HTTPS, links could be replaced and executable file downloads could be swapped for malware.

  • woogley 7 years ago

    There are plenty of reasons even beyond privacy and MITM content changes. Supposedly HTTPS is better for SEO, and there are also browser APIs[1] that only work in an HTTPS context.

    [1]: https://developer.mozilla.org/en-US/docs/Web/Security/Secure...

    • ryanisnan 7 years ago

      If you're building an application that requires the uses of these APIs, I'd hardly call that a static site, but maybe that's just me.

      • thethirdone 7 years ago

        Static site generally just refers to the server serving static files.

        I have several simple games hosted on Github Pages using the storage API which is on that list.

      • Avamander 7 years ago

        HTTP/2 is one of those things that only works over HTTPS, so it can absolutely be a static site being served over it.

  • ummonk 7 years ago

    You can enable HSTS preload and prevent spoofing.

  • dumboluzz 7 years ago

    People sniffing what kind of static content you are consuming. Think public unsecured Wi-Fi, or sensitive material.

    • ryanisnan 7 years ago

      True. I suppose for many types of content that could be a legitimate concern.

tempsolution 7 years ago

Ha, I just did the exact same thing yesterday evening... Funny how stuff like this hits the number one spot on Hacker News these days.

romanovcode 7 years ago

How to host static website with HTTPS, a global CDN and custom domains for free:

1. Setup public repo with Hugo project

2. Add Travis CI integration with GH Pages

3. Use CloudFlare for free SSL + other goodies

Why would anyone need this?

  • whereareyouwow 7 years ago

    Totally agree! The only thing is GH Pages might limit your size at some point... Wish they would introduce per-GB pricing and allow you to scale. That would make it a permanent solution.
