Document reasoning to block DNS to 127.0.0.1
Not a fan of EasyPrivacy. It seems to be run by trigger-happy people with a pretty limited understanding of the web.
They once blocked workers.dev (Cloudflare Workers) wholesale[1], resulting in a huge flood of issue reports for a few FOSS services of mine. Guess they've never heard of public suffixes.
This one appears to be someone reading about DNS rebinding attacks somewhere, then pulling the trigger without understanding them. Or maybe I'm even overestimating them; DNS rebinding only came up as a justification very deep into the discussion.
To make matters worse, clients using these block lists have update frequencies all over the place, so you can never be sure when your stuff gets unborked for all your users even after they revert changes like this.
Edit: Actually, DNS rebinding was brought up by the issue reporter, all that the committer presented was some handwavy "give me a reason I shouldn't block this"... How about checking the sites you blocked for a reason.
This is the explanation given by the person who committed this change:
> Is a security issue, imagine if you're running a webserver on a site decided to access it from outside, whether to fingerprint you or act nefariously. There should be no reason why a third-party access localhost. But do tell me, why we should we trust sites accessing localhost.
It makes no sense to me. Unless someone knows of a better reason, I'm of the opinion that this change should be reverted.
I've debated that reasoning in the security field and it just goes round and round in circles. There are legitimate cases for avoiding this, but they're less about security and more about third-party scanning tools that poorly distinguish ordinary requests to loopback from an actual DNS rebinding attack, which requires malicious code. Avoiding it spares you some false positives from those scanners and some silly arguments with people. There are other obscure edge cases, but they delve into hypothetical scenarios, and people can never seem to show a real-world implementation of their theoretical attack. Besides, there is nothing stopping anyone from pointing any domain at 127.0.0.1 on their recursive servers or via /etc/hosts, so if this is a risk then somebody is doing something very wrong.
Funny story though: I used to park wildcard sub-domains on 127.0.0.1 just to keep the bots off the load balancers, and a customer said that we were running a vulnerable version of PHP. I said we had no installations of PHP anywhere in production. Turned out they were scanning one of my parked wildcard sub-domains and effectively scanning their own laptop, which had some old PHP web app running on it. That also told me they weren't validating certs.
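For anyone wondering what that "parking" looks like, it's just a wildcard A record pointed at loopback, roughly like this (hypothetical zone snippet, with example.com standing in for the real domain):

    ; park all otherwise-unused subdomains on loopback
    *.example.com.   300   IN   A   127.0.0.1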
> park wildcard sub-domains on 127.0.0.1
That sounds like a good practice -- why isn't this done more often, I wonder?
EDIT: On second thought, I am not so sure. I am not an expert here so I will not try to guess :)
In my experience most DNS admins abhor the idea of putting private IP addresses in public DNS space and it's simply not even an option they consider. I've used weird DNS tricks like this for years and never really encountered any issues, though. I currently have both my wireguard and private IP networks published to public DNS to make my life easier, for example.
I allow it. I use A records to set individual subdomains of my personal domain to individual Tailscale IPs. Then, when Tailscale is connected, all is well. Is this worth a telling off?
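Concretely, it's just ordinary A records in the public zone, something like this (hypothetical names; Tailscale hands out addresses from 100.64.0.0/10):

    nas.example.com.     300   IN   A   100.101.102.103
    media.example.com.   300   IN   A   100.101.102.104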
Private IPs shouldn't be publicly resolvable. For one, you are no longer standards-compliant: if you want to depend on the IPv4 standard, you've already broken the thing you're trying to depend on.
I was also confused about what was happening until I read this comment: https://github.com/easylist/easylist/issues/16372#issuecomme...
> Stupid example of why it may matter: say you installed LAMP on your computer several years ago, you're not using PHP frequently, and you haven't kept it up to date (so it probably contains a few nasty security vulnerabilities), but it still opens up on boot and listens on localhost.
> Now you open some website, it accesses 127.0.0.1, check for the LAMP vulnerability and exploit it if found. Congratulation, you have been pwn3d.
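To make the mechanics concrete, the probe itself is trivial from page JavaScript. A rough sketch (the port and path are made up, not a known target, and Chrome's Private Network Access work is meant to interfere with exactly this):

    // Hypothetical localhost probe from a web page.
    async function probeLocalService(): Promise<boolean> {
      try {
        // mode: "no-cors" lets the request fire even though the target sends
        // no CORS headers; the response is opaque, but success vs. network
        // error (and timing) still reveals whether something is listening.
        await fetch("http://127.0.0.1:8080/phpinfo.php", { mode: "no-cors" });
        return true;  // something answered on that port
      } catch {
        return false; // connection refused or blocked
      }
    }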
Interesting but very rare scenario?
I've got to ask - if you're doing local dev or whatever, what's so hard about turning ublock off for your dev site?
The logic seems ok for "out in the wild" cases - but ublock still lets you override stuff if you know what you're doing.
At a previous job I worked on a web-based file sharing app, and to share individual files we used a generic share icon from the font awesome project with appropriate labels.
We got a bug report from someone in the company that the share icon was missing, and after investigating we saw the other icons (for edit and delete) were visible but not sharing. Long story short, they used an adblocker with a setting to block social media sharing links (Facebook likes, Twitter follows, etc) and it was also removing our icon.
In this instance, maybe it'd be fine to turn it off for localhost and keep it on for staging but still...
In my experience, ublock breaks an awful lot of sites you have to use, such as your payment processor website, sales channel manager sites, etc. For me, at least, if a website seems to not be working quite right, I disable ublock and try again after a refresh.
I guess you could call those "trusted" sites perhaps.
Generally you want to use what your users use. Running common adblock lists as a dev can help identify /weird stuff/ early.
Plus, genuinely, the privacy list is good. Just not when it gets in the way of building good things.
I thought Chrome was now blocking this anyway unless you specifically opt in with CORS.
It’s called Private Network Access and it’s still behind an experimental flag for now.
https://wicg.github.io/private-network-access/
https://developer.chrome.com/blog/private-network-access-upd...
Also ws/wss have no SOP/CORS, which could be a problem, but that has nothing to do with the domain blocking here.
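For reference, the preflight exchange Private Network Access adds looks roughly like this (going from memory of the draft, so treat the exact header names as approximate):

    OPTIONS /endpoint HTTP/1.1
    Host: 127.0.0.1:8080
    Origin: https://public.example.com
    Access-Control-Request-Private-Network: true

    HTTP/1.1 204 No Content
    Access-Control-Allow-Origin: https://public.example.com
    Access-Control-Allow-Private-Network: true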
The link you gave said it released 2 years ago in Chrome 94.
That's why I said "behind an experimental flag". Search for "private network" in chrome://flags.
Oh I see what you mean. The default blocking behavior in Chrome 94 only applies to public non-secure contexts. We’re talking about a stricter form applied to secure contexts here.
While I somewhat agree with their reasoning, someone must also point out that we are quickly getting into multicast territory here.
Should websites be allowed to resolve airprint or airdrop based devices, given the history of CSRF vulnerabilities in consumer routers? Probably not.
Devs seem to forget that most humans are not developers, and EasyList's decision has to be read in that context.
The point of those lists is to block access to local domains so a malicious website that got through the filters isn't able to pwn your whole network.
And if you're arguing that websites should be allowed to access the local network, you are probably someone who doesn't give a damn about securing those devices anyway.
I must be in the minority here thinking this move makes total sense.
Once I had to apply a firmware update to a device (don't remember what it was). I had to install some vendor's software but surprisingly the instructions said to then visit a public URL like fwupdate.vendorswebsite.com, which indeed applied the update to my physically connected device.
I dug into it, and it turns out the software launches a local web server listening on localhost, which exposes an API that the website accesses over plain HTTP with CORS. This web server talks to the device connected over USB.
This felt like an egregious breach of privacy--public websites should not be allowed to arbitrarily exchange data with locally-bound servers. Even though this was the intended design of the firmware update process, my browser really should not have let this occur by default without my explicit opt-in.
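The pattern, roughly sketched (everything here is hypothetical; I don't know the vendor's actual ports or endpoints):

    // The public page talks to a helper process listening on localhost,
    // which in turn owns the USB-connected device.
    async function pushFirmware(image: ArrayBuffer): Promise<void> {
      // This only works because the local helper answers with permissive CORS
      // headers (e.g. Access-Control-Allow-Origin for the vendor's site).
      const res = await fetch("http://127.0.0.1:8443/device/firmware", {
        method: "POST",
        body: image,
      });
      if (!res.ok) throw new Error(`update failed: ${res.status}`);
    }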
This is pretty unrealistic even given the maintainer's example:
> Is a security issue, imagine if you're running a webserver on a site decided to access it from outside, whether to fingerprint you or act nefariously. There should be no reason why a third-party access localhost. But do tell me, why we should we trust sites accessing localhost.
That web server would need to be configured for CSRF and CORS of that specific domain as well. If this were an attacker then it wouldn't take long to seize that domain.
To fully extrapolate that, the server would only be accessible from the user's machine. There's no implication of "third-party access". Maybe if they were demanding that the website have a higher class of certificate verification I'd understand, but frankly, without an example of where and how this is a vector, I'm skeptical.
The example is later in the thread, search for "DNS Rebinding" and the related discussion on the ticket, or in your search engine of choice.
Has this user’s GitHub account potentially been compromised?
The reasoning they give makes no sense; their style of writing also doesn’t match previous commits that they’ve made. Maybe looking too deep into this, but this commit makes absolutely 0 sense and should be reverted.
That actually reads like Ryan to me, I don't think his account has been compromised.
From the commit set (e.g., [0]), it looks like he was expanding EasyList's blocking of sites that use 127.0.0.1 DNS records to carry out DNS rebinding attacks and fingerprinting, and overlooked this legitimate use case for such records.
Legitimate, that is, as long as all of the domain owners are trusted, because this does open up opportunities for content served from those domains to punch through the same-origin policy and read back data served from 127.0.0.1. This can be a security hole, e.g., I've seen browser extensions in the wild which jury-rig IPC to an external helper process by opening up an HTTP API on a local port.
[0] https://github.com/easylist/easylist/commit/f11ee956a6e585d8...
Do they only block 127.0.0.1? If this truly is a security improvement I would like to see an explanation why they don't block 127.0.0.0/8 instead.
No, they blocked (now reverted) a specific list of domains which are presently known to resolve to 127.0.0.1. The actual IP address isn't used, it's a DNS block.
Annoying, but understandable. There's a reason localhost gets special treatment, as do many other local addresses. Local dev sites easily form a fingerprint that you don't want trackers to be able to use.
I'm not sure why this applies to first party browsing, though. In its current form (https://github.com/easylist/easylist/blob/master/easyprivacy...) several of these domains got the $third-party modifier, which blocks cross-site requests to them and should resolve most of the fingerprinting risk. I'm not sure why this isn't the default, to be honest.
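For reference, a third-party-only rule in EasyList syntax looks roughly like this (using localho.st, one of the domains discussed in the thread, as the example):

    ! block only cross-site (third-party) requests to the domain
    ||localho.st^$third-party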
That said, if you're developing software you should probably be running without any addons like uBlock enabled to prevent surprises in production for your non-uBlock users. Besides that, you can't get HTTPS for these domains (without the mess of a custom CA and even then you'll run into CT issues) so development doesn't even reflect real life deployments. Secure origins matter!
Lastly, you can't be sure any of these domains won't eventually resolve to a real IP address somewhere down the line, unless you own them. They're very useful but also very out of your control and that makes them a potential security risk.
The workaround should be obvious: add an entry to your hosts file, using either a domain you own or the reserved TLDs (.test, .example, .localhost, .invalid, .home.arpa, and maybe .local, though that one can conflict with mDNS).
If you're using Chrome, you can probably use .localhost already, as it resolves those names to loopback for you. Still, adding the .localhost names you actually use to your hosts file (hosts files don't support wildcards) will ensure that things resolve as intended.
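For example, in /etc/hosts (pick whatever names you like under the reserved TLDs):

    # hosts files don't do wildcards, so list each name you use
    127.0.0.1   myapp.test api.myapp.test
    ::1         myapp.test api.myapp.test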
> you can't get HTTPS for these domains
What are you talking about? Certificate issuance and DNS A/AAAA records are entirely decoupled. Use the ACME dns-01 challenge to get a cert for domains resolving to anything, including 127.0.0.1 or ::1. You can also use http-01 for specific non-wildcard names and point them at localhost afterwards (wildcards require dns-01). I use Let's Encrypt certs for localhost and LAN every day.
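For example, with certbot (assuming you control DNS for local.example.com; the dns-01 challenge is what allows the wildcard):

    certbot certonly --manual --preferred-challenges dns \
      -d local.example.com -d '*.local.example.com'
    # then point those names (public A records or hosts entries) at 127.0.0.1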
Edit: a little more precision.
Sure, you can get a certificate for the domains you own, but not for domains you don't own. If I can get a cert for localho.st, someone made a huge mistake.
You can point your own domain at localhost no problem, and you can even keep that record on a local DNS server only, so nobody else can abuse it and your domain doesn't get filtered out by tools like these.
However, I assume someone using fbi.com because it happens to resolve to localhost doesn't own a domain (or can't be bothered to set up a record of their own).
For at least one of these tools / domains, local SSL is available. Details here: https://docs.lando.dev/core/v3/security.html
> if you're developing software you should probably be running without any addons like uBlock enabled to prevent surprises in production for your non-uBlock users.
It seems to me the bigger risk is uBlock blocking something and breaking the site, rather than uBlock making something work that wouldn't work for people who don't have it. I once had a filter block something called /share/ or share.js; fortunately I noticed during development. I definitely prefer having it enabled while developing.
> Besides that, you can't get HTTPS for these domains (without the mess of a custom CA and even then you'll run into CT issues)
Indeed. I recently had to do this and found mkcert [1] which makes it very easy to do. But it's overkill for most situations.
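For anyone who hasn't used it, the whole flow is roughly:

    mkcert -install    # creates a local CA and adds it to your trust stores
    mkcert myapp.test localhost 127.0.0.1 ::1    # issues a cert for those names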
> Besides that, you can't get HTTPS for these domains (without the mess of a custom CA and even then you'll run into CT issues)
Of course you can, you just can't use HTTP validation for it. Use DNS validation and it works fine.
Not if you don't control the DNS. I don't know who controls fbi.com but I sure can't get a trusted certificate for it
That was not the point you made in the original post. You said
> Besides that, you can't get HTTPS for these domains (without the mess of a custom CA and even then you'll run into CT issues) so development doesn't even reflect real life deployments. Secure origins matter!
So you can absolutely make development match deployments.
I thought about the cert thing here too.
I own a domain and internally use local.domain.com for all internal sites. Wildcard and specific names.
I can generate certs using ACME/LetsEncrypt.
So, everything, including test sites could be on that domain.
For reference, I use PiHole and OpnSense, and internally, machines on DHCP and static IPs get local.mydomain.com resolution too.
You can generate valid certificates for the domains you own and make the DNS point at anything you like. It's quite a pain for a dev setup, though: LE certificates only last three months, long enough to forget about your setup but short enough that you'll need to keep renewal running.
In this specific case, it's about a bunch of generic domains set up by other people.
In your pihole example the situation would be even better because you don't need to publish A records for the domains anywhere. That means nobody can abuse your domain for fingerprinting workarounds but you still maintain complete control.
Yeah, it definitely introduces friction and setup cost.
OpnSense has an ACME plug-in that auto-renews and can trigger jobs. In this case I have it renew and push certs to the servers, so they're always up to date.
This is not the right way to block domains that resolve to local addresses, even if that were something you wanted to do.
If you know the IP you want to block, you should just block the IP instead of chasing down every domain that happens to resolve to it, especially since those records can change at any time.
are you suggesting to block 127.0.0.1?
If you want to block requests to 127.0.0.1, DNS is the wrong level to do it at.
The issue might be more interesting to read: https://github.com/easylist/easylist/issues/16372
Ok, changed from https://github.com/easylist/easylist/commit/68d7a669e6cdc270.... Thanks!
Could you bring back the original title since it provides context as to what's going on?