Avoiding Internet Centralization

mnot.github.io

155 points by johndbeatty 4 years ago · 110 comments

walrus01 4 years ago

Something not really covered is this concept: "Maybe don't let one telecom company acquire too much control".

Look at the history of everything that was acquired by either Qwest/CenturyLink or Level3, and then the merger of the two. You can't tell me that the existence of Lumen, the combined CenturyLink-Level3 entity, is good for anyone except their shareholders.

It's the very definition of too much centralization.

Look at all of the things that have now been jammed together into the modern Verizon, as well.

Look at the sad state of competition in Canada, with Rogers and Shaw trying to merge.

  • lotsofpulp 4 years ago

    Also need fat upload pipes (i.e. fiber) to individuals’ homes so they can operate their own backup and peer to peer services.

    When 90%+ of people have no upload capacity at their house and are stuck behind CGNAT, OneDrive/iCloud/Google Drive become the only practical way to store things you can access later. Same with chat protocols.

    • tkojames 4 years ago

      They just rolled out AT&T fiber in my neighborhood. I was surprised, as I live in an older neighborhood. We have overhead power lines; I talked with the AT&T installer and he said that was the reason we got it. Went from cable at 1 gig down / 30 Mbps up for $100 a month to $70 for a gig full up and down, and they gave me 400 bucks for signing up. It is pretty awesome having a direct fiber line to the house. My brother lives in Santa Rosa, California and is on the list to get 10 gig fiber from Sonic for $65 a month. Hopefully things are starting to change. Once he has it I was thinking of just doing rsync between us for backup.

      • pabs3 4 years ago

        What equipment does the fiber line plug into? Can you run your own device? Is it PON or AON fiber?

        • judge2020 4 years ago

          AT&T's residential offering requires you use their terminal[0] which is apparently a G-010G-A[1], so PON. They also require you use their own gateway+wifi combo unit[2], although you can do full IP passthrough so that the gateway doesn't even do ipv4 NAT. I have an installation tomorrow and will reply in case the box is a different model.

          0: https://www.att.com/support/article/u-verse-high-speed-inter...

          1: https://www.ebay.com/itm/164939095713

          2: https://forums.att.com/conversations/att-internet-equipment/...

        • OldTimeCoffee 4 years ago

          ATT Fiber is mostly GPON with some areas that are XGS-PON. They just started rolling out 2gbit and 5gbit service in one city.

          There are a couple of bypass methods on the GPON network that let you use your own equipment with the ONT and bypass the RG. I don't think I would bother, though; I'd just use the 'IP Passthrough' DMZ mode to assign the public IP to your router. The only problem was that the internal connection table on the RG was historically very small and didn't clear quickly, which really only causes problems if you torrent. The newer RGs don't have this problem.

          On the XGS-PON network the ONT is integrated into the RG so it's not possible right now, but again the DMZ 'IP Passthrough' mode actually works well.

          They support IPv6 and they give out a /56 if that matters to you.

    • tzs 4 years ago

      I don't understand. If my upload bandwidth is sufficient for me to keep my stuff on OneDrive/iCloud/Google Drive, why wouldn't it be sufficient to keep my stuff on any other remote storage service?

      • lotsofpulp 4 years ago

        You can keep it on other services, but economies of scale, bundle pricing, and security concerns benefit the large incumbents. For example, I stick to iCloud because all of my data is already exposed to Apple; spreading it across more entities would just increase my vulnerability.

        However, as an individual, I could gain great utility by not exposing any of my data to any company. And I would not have to if I could set up a NAS at home with a 1Gbps+ upload that my family and I could back our devices up to, use in a similar fashion to Dropbox, or run a peer-to-peer chat protocol on in place of WhatsApp.

        But all of that is a nonstarter: because of the minuscule number of people with 1Gbps+ connections at home, with IPv6, and not hidden behind CGNAT, there is no viable market for selling the software and NAS hardware that could cut out the big tech companies.

  • FridayoLeary 4 years ago

    Is it? In the UK BT owns all the wires and such, yet I don't think it majorly affects consumers, nor are their competitors being crushed.

    • ssss11 4 years ago

      There are a couple of others who own wires, but really only in London (Colt; I forget the other one).

      BT retail (the arm that sells to clients) has strict rules that forbid it from sharing with BT wholesale (the arm that sells to the other ISPs) so BT retail really can’t crush the competitors.. I don’t know the exact arrangement but they’re treated like any other ISP customer I believe

    • gerdesj 4 years ago

      See if you can find our ENUM registry and use it.

      BT doesn't own our wires as such. Openreach does (yes, they were formerly BT but were spun off).

      BT or OpenReach - who cares? The important thing is functionality. I'd like to provide you with a novel telephony setup but the lack of ENUM means I am not able to do that.

      • M2Ys4U 4 years ago

        >BT doesn't own our wires as such. Openreach does (yes, they were formerly BT but were spun off).

        Openreach are a wholly-owned subsidiary of BT; they're not really independent.

    • cortesoft 4 years ago

      Depends on how heavily regulated it is

  • serverholic 4 years ago

    Starlink should help with this by bypassing traditional networks.

    • melony 4 years ago

      They are the definition of centralized.

      • serverholic 4 years ago

        Two networks are less centralized than one. Why do people think centralization is binary?

        • justinpombrio 4 years ago

          And Starlink is especially good, because it can compete with all of the cable companies at once. No one will have a monopoly anymore. Hopefully a lot of people will suddenly start getting better service, which I hear is common when competition rolls into town.

          • tzs 4 years ago

            Starlink will not have enough bandwidth, even after the full constellation is deployed, to compete with even one cable company in a major metropolitan area. Starlink is designed for places where there is not a high customer density.

            • serverholic 4 years ago

              Do you have any resources to learn more about this?

              • tzs 4 years ago

                Just random Googling. Google for things like "Starlink aggregate bandwidth" or "Starlink satellite bandwidth". Here's one article looking at this [1].

                Each satellite is visible, under ideal circumstances, from about 3% of the surface of the Earth at once. At an eventual 12000 satellites, that would mean 360 satellites would on average be visible from a given spot. The orbits are such that there will be much less coverage in the far north or far south and much more in between, so let's call it 1000 satellites on average visible in a given area that they will serve.

                The next generation satellites are supposed to have 80 Gbps bandwidth to the ground. That's enough for 800 users simultaneously downloading. But most users most of the time are not going all out.

                Let's try to estimate what average use is. Most Xfinity users have no trouble staying under their 1.2 TB/month cap. I've read somewhere that the average usage is only about 1/4 that. That would correspond to an average usage of around 1 Mbps, 1/100 of the speed of a Starlink connection. Using that, we can handle 80000 users per satellite, or 80000000 users for the 1000 satellites visible on average.

                Taking into account the ground area in which a satellite is visible, we can work out the maximum ground density of customers at that usage level that stays within the bandwidth total. It works out to about 11 users/square mile (4.4 users/square kilometer).

                In summary, Starlink should be able to deliver a decent 100 Mbps 100x oversold internet service in places where over the area visible from a satellite (about 7000000 square miles or 18000000 square kilometers) the customer density is under 11/square mile or 4.4/square kilometer.

                The US population density is 94/square mile (36/square kilometer). California is 2.5x that, Florida and New York around 4x that.

                The density it can support is probably actually a little lower than I got because I'm assuming that any satellite visible from your location can be used. My understanding is that the antenna is aimed in a good general direction at setup, and then uses a phased array to track satellites as they come and go. I'd expect that this limits you to satellites that are in front and not too far to the side of where the dish is pointed.

                [1] https://www.techdirt.com/articles/20200928/09175145397/repor...
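
                To make the arithmetic easy to check, here is the same estimate as a few lines of Python (all the constants are the assumptions above, not official Starlink figures):

                  # Back-of-the-envelope check of the numbers above; all inputs are assumptions.
                  SAT_DOWNLINK_GBPS = 80           # assumed per-satellite bandwidth to the ground
                  VISIBLE_SATS = 1000              # rough average visible over the served latitudes
                  AVG_USAGE_MBPS = 1               # ~300 GB/month per subscriber
                  VISIBLE_AREA_SQ_MI = 7_000_000   # area from which one satellite is visible

                  users_per_sat = SAT_DOWNLINK_GBPS * 1000 // AVG_USAGE_MBPS   # 80,000
                  total_users = users_per_sat * VISIBLE_SATS                   # 80,000,000
                  density = total_users / VISIBLE_AREA_SQ_MI                   # ~11.4 users / sq mile
                  print(users_per_sat, total_users, round(density, 1))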

          • immibis 4 years ago

            Unless Starlink becomes the new monopoly, which is worse because it's one global monopoly instead of a different monopoly per region.

    • hansel_der 4 years ago

      why?

      wireless is only radically different from a consumer perspective and the capacity is nothing to write home about. It's the coverage and latency that bring value.

bullen 4 years ago

They should probably add that merely adding yet another protocol centralizes things.

Implementing them takes time and you need many implementations for the protocols themselves to become de-centralized.

This is what breaks most new protocols and languages, combined with diminishing returns (the low-hanging fruit has already been harvested).

Personally I'm going back to HTTP, DNS and SMTP.

And even if DNS is completely centralized, it's the only thing we have for name lookups after 38 years!

Also I never rely on DNS if I can avoid it (I use static IPs and only use the hostname for virtual hosting / load balancing).

And de-centralization by hosting is more important than the protocol itself being p2p, since no p2p protocol can operate purely without a server because of discovery.

I have made my own, from scratch, implementation of all 3:

DNS and SMTP are soon coming to rupy (HTTP), enabling DNS and SMTP through HTTP; you'll basically be able to control DNS and SMTP via a "Servlet/Filter".

Home hosting on fiber with static IP and ports 80, 53 and 25 open is the real challenge.

Making sure your ISP enables those has way higher priority than this document!
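
As a rough sanity check, you can probe those ports from a machine outside your own network once something is listening on them at home (a sketch; the IP is a placeholder, and this only tests TCP, so DNS over UDP/53 needs a separate check):

  # Probe inbound TCP 80/53/25 on a home connection from an OUTSIDE vantage point.
  # Run this only against your own IP, with services already listening at home;
  # a timeout or refusal may mean the ISP filters the port.
  import socket

  HOME_IP = "203.0.113.7"  # placeholder: your home's public IP

  for port in (80, 53, 25):
      try:
          with socket.create_connection((HOME_IP, port), timeout=5):
              print(f"tcp/{port}: reachable")
      except OSError as exc:
          print(f"tcp/{port}: blocked or closed ({exc})")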

And the real canary is when you stop getting an external IP on your fiber once IPv4 allocations in Africa run out.

It's time to wake up if we want an internet that does not become rent seeking.

Google charges for static IP addresses on GCP which should not be a thing if they get allocations for free.

IPv4 is a scarce asset, so they have an incentive to slow down IPv6!

  • tecleandor 4 years ago

    > Google charges for static IP addresses on GCP which should not be a thing if they get allocations for free.

    Same as in AWS (IIRC), in Google Cloud you don't get billed for static IP addresses if they are in use:

    "If you reserve a static external IP address and do not assign it to a resource such as a VM instance or a forwarding rule, you are charged at a higher rate than for static and ephemeral external IP addresses that are in use.

    You are not charged for static external IP addresses that are assigned to forwarding rules."

    https://cloud.google.com/vpc/network-pricing

    • bullen 4 years ago

      That's not true; I have 3 static IPs connected to VMs and they charge me for them:

      External IP Charge on a Standard VM Usage 2021-11-01 2021-11-30 2,XXX.XXX hour 50.XXXXXSEK

      So ~$2 per IP in USE per month!

      >gcloud compute addresses list

      ERROR: (gcloud.compute.addresses.list) Some requests did not succeed:

      - Request had insufficient authentication scopes.

      We need to move away from these companies before it's too late... they are incompetent and rent seeking.

      They also do not rebate your free instance if you shut it off and change the instance type of the instance that had the rebate, even if you change it back.

      And there is no recourse, no support, no way to get help.

      • bullen 4 years ago

        I just had to authenticate apparently:

          NAME  REGION        ADDRESS          STATUS
          euro  europe-west1  X.X.X.X          IN_USE
          asia  asia-east1    X.X.X.X          IN_USE
          iowa  us-central1   X.X.X.X          IN_USE

        • tecleandor 4 years ago

          OK, now I see. I got certified a couple of years ago, and it seems like they introduced this pricing last year.

          So "...you are not charged for static external IP addresses that are assigned to forwarding rules", yes, but a VM is not a forwarding rule :P

          So you gotta pay for external IPs attached to a VM, but not to a Load Balancer. Seems like they're trying to incentivize using LBs to use fewer IP addresses, I guess...

  • yjftsjthsd-h 4 years ago

    > And even if DNS is completely centralized, it's the only thing we have for name lookups after 38 years!

    Depending on how you mean, it's not the only thing or it's not 100% centralized; https://en.wikipedia.org/wiki/Alternative_DNS_root lists the major alternatives in that immediate space.

superkuh 4 years ago

> 5.2. Encrypt, Always: When deployed at scale, encryption can be an effective technique to reduce many inherited centralization risks. ...

The problem here is the word "Always". Encryption is good for just the reasons they say. But only encryption, always encryption, not having an option for plain text is highly centralizing in itself. This is because the current status quo for encryption is to use TLS based on certificate authorities. And CAs are always highly centralized and highly centralizing.

If Let's Encrypt ever goes corrupt like dot Org did, it would cause an incredible amount of trouble, and that entity would have power over a large portion of the web, if not the entire internet. There's an easy solution to this though. Don't throw away plain protocols. Plain and TLS-wrapped are synergistic. Use both. There's no need to always encrypt without an option for plain text, and doing so is damaging.

A hypothetical downgrade attack is not an excuse for using only highly centralized TLS CA based protocols in this context.

  • AnthonyMouse 4 years ago

    > This is because the current status quo for encryption is to use TLS based on certificate authorities.

    Not everything has to be TLS or even HTTP. Look at messaging apps. Signal is encrypted, but the end-to-end encryption it uses isn't TLS and doesn't use certificate authorities.

    > If Lets Encrypt ever goes corrupt like dot Org did it would cause an incredible amount of trouble and that entity would have power over a large portion of the web, if not the entire internet.

    Not really. Let's Encrypt doesn't have a monopoly over anything. They use an open protocol (ACME) that any other CA could implement. If they went evil, someone else would implement the same protocol and everybody would switch to them. Which also implies that they won't, because why bother if that's what will happen?

    This is kind of a problem with the CA system the other way -- if you have one bad CA they can sign any domain even if they shouldn't -- but in this case it prevents what you're worried about.
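
    To make the "open protocol" point concrete: an ACME client is pointed at a CA by nothing more than a directory URL, so switching CAs is in principle a one-line change. A small sketch fetching two public ACME directories (the URLs are those CAs' documented endpoints at the time of writing; treat them as assumptions):

      # An ACME client discovers everything it needs (new-account, new-order, etc.)
      # from one directory URL, so moving to another CA mostly means changing it.
      import json, urllib.request

      DIRECTORIES = {
          "Let's Encrypt": "https://acme-v02.api.letsencrypt.org/directory",
          "ZeroSSL": "https://acme.zerossl.com/v2/DV90",  # another ACME-speaking CA
      }

      for name, url in DIRECTORIES.items():
          with urllib.request.urlopen(url, timeout=10) as resp:
              directory = json.load(resp)
          print(name, sorted(k for k in directory if k != "meta"))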

  • foxfluff 4 years ago

    LE also forces you to rely on DNS, which is highly centralized..

    • midasuni 4 years ago

      How is DNS centralised?

      • judge2020 4 years ago

        All roads lead to . [0], i.e. IANA. Many IANA-approved entities run the root servers[1], but they all only resolve TLDs that ICANN authorizes (and those TLD operators control what domains are registered under their TLD, of course).

        0: https://dns.google/query?name=.&rr_type=NS&ecs=

        1: https://www.iana.org/domains/root/servers
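
        You can see that single root from any machine; a small sketch using the third-party dnspython package (it asks your normal resolver, which itself ultimately chains up to those root servers):

          # Every full resolution starts at "." and walks down the hierarchy.
          # Requires the third-party "dnspython" package (pip install dnspython).
          import dns.resolver

          for zone in (".", "com.", "example.com."):
              ns = dns.resolver.resolve(zone, "NS")
              print(zone, sorted(str(r) for r in ns))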

      • im3w1l 4 years ago

        Well, on a technical level there are root servers. But DNS is a hierarchy, so if the root servers ever tried to pull a fast one, there are second-in-command authorities that could take over: the ccTLD orgs. People would rather follow their lead than ICANN's, so they have the real power. I'm pretty sure this is by design.

    • xxpor 4 years ago

      What CA doesn't?

      • hackmiester 4 years ago

        I don't think it was meant as a criticism, just a statement of the current status quo, which is inherently rooted in the centralized DNS.

      • foxfluff 4 years ago

        There are many CAs that give certs for IPs. LE won't.

        Not that it's much better. IPs are still granted to you by someone in a centralized hierarchy.

  • gefhfff 4 years ago

    Encryption does not imply authentication, does it?

    • superkuh 4 years ago

      Browsers scaremonger really hard about self-signed SSL certs. And browsers are starting to implement HTTPS-only as a default. It won't be too long before HTTP is blocked by mega-corp browsers, and not having a CA-issued TLS cert means your website is un-visitable by non-technical people (and not indexed by search engines).

      • midasuni 4 years ago

        The concern about HTTP versus HTTPS is that a bad actor can intercept and change traffic.

        If you allow self-signed certificates, anyone who can MITM traffic can masquerade as your site, just like with HTTP.

        Self-signed does, however, stop passive fibre taps: to intercept you need to MITM.

        There's then the “remember this cert” option. If I visit www.selfsigned.com on a secure network, my browser remembers the certificate. If I then travel to another network with a MITM, my browser can flag up a warning. This is how SSH works.

        However I’m not too concerned by SSL certificates as a centralised point: my browser trusts dozens, probably more than 100, root certificates. That’s not centralisation.
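
        A minimal sketch of that SSH-style "remember this cert" idea (trust on first use), using only the standard library; www.selfsigned.com is just the example host from above, and the storage format is arbitrary:

          # Trust-on-first-use for TLS: remember a host's certificate fingerprint the
          # first time we see it, and warn if it later changes (like SSH known_hosts).
          import hashlib, json, pathlib, ssl

          STORE = pathlib.Path("known_certs.json")

          def check(host, port=443):
              pem = ssl.get_server_certificate((host, port))   # fetched without CA validation
              fingerprint = hashlib.sha256(pem.encode()).hexdigest()
              known = json.loads(STORE.read_text()) if STORE.exists() else {}
              if host not in known:
                  known[host] = fingerprint
                  STORE.write_text(json.dumps(known))
                  return "first sight: fingerprint remembered"
              if known[host] == fingerprint:
                  return "matches the remembered fingerprint"
              return "WARNING: certificate changed since last visit"

          print(check("www.selfsigned.com"))  # example host from the comment above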

        • immibis 4 years ago

          Self-signed certs should be no scarier than unencrypted connections. If self-signed certs are allowed then you can have a case for banning unencrypted connections - the way Mozilla tried to do in the past, but they didn't allow self-signed certs.

          If we're not going to show interstitial warning pages for HTTP-not-S sites, so you can't see if it's HTTPS without checking the address bar, then a red open padlock and a red strike through the "https" seems sufficient for self-signed HTTPS sites. Some indication is needed, otherwise you'd see the "https" and think it was secure, but the indication shouldn't be scarier than HTTP-not-S!

        • mistrial9 4 years ago

          This seems to be a well-known list of trusted Certificate Authorities:

          https://ccadb-public.secure.force.com/mozilla/CAInformationR...

betterunix2 4 years ago

"Internet routing requires addresses to be allocated uniquely, but if the addressing function were captured by a single government or company"

Technically it is so captured -- IANA is the root of the hierarchy that distributes both IP address assignments and ASN assignments -- and the RIRs are effectively centralized authorities in their regions. Thus far it has not been a problem.

With a larger address space and longer ASNs you could decentralize the entire process. Basically, subnets and ASNs would be hashes of public keys, and you would use a path-vector protocol where the NLRIs contain NIZKs (non-interactive zero-knowledge proofs) proving knowledge of the secret keys and asserting who the NLRI was sent to at each hop (identified by ASN). It is not currently being considered because (1) it would greatly increase the cost of routers and related infrastructure and (2) thus far there is no immediate need.
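
To make the first half of that concrete, a toy sketch of self-certifying identifiers, where an ASN and a prefix are carved out of the hash of a public key (the key type, the 32-bit truncation, and the ULA-style prefix are arbitrary illustrative choices; the path-vector/NIZK machinery is omitted):

  # Derive a "self-certifying" ASN and prefix from a public key: ownership is then
  # proven with the matching private key rather than granted by a registry.
  import hashlib, ipaddress, os

  pubkey = os.urandom(32)                    # stand-in for a real (e.g. Ed25519) public key
  digest = hashlib.sha256(pubkey).digest()

  asn = int.from_bytes(digest[:4], "big")    # 32-bit ASN carved out of the hash
  prefix = ipaddress.IPv6Network((b"\xfd" + digest[4:11] + b"\x00" * 8, 64))

  print(f"AS{asn}, announces {prefix}")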

rapnie 4 years ago

In "The Promise and Paradox of Decentralization" [0] I found this quote to be very appealing wrt decentralization:

> "[A]ny decentralized order requires a centralized substrate, and the more decentralized the approach is the more important it is that you can count on the underlying system."

This somewhat counterintuitive notion is often overlooked. In order to facilitate a healthy decentralization effort you need a heck of a collaborative movement to make it a reality and sustain the initiative.

[0] https://www.thediff.co/p/the-promise-and-paradox-of-decentra...

fouc 4 years ago

Avoiding Internet Centralization means.. avoiding browser centralization.. means avoiding the dominant browser stack (currently chrome-based).. means avoiding browser auto-updates.. means discouraging user agent strings & browser/os version fingerprinting.. means avoiding cloudflare..

DarthNebo 4 years ago

Decentralisation efforts at this point are very akin to the democratic movements of centuries ago. Governments are reluctant to entertain efforts to take back control; the most recent example is Starlink being asked to stop selling in India since they aren't registered as an ISP. The whole point of satellite internet is to avoid geo-control by any government body or local ISP.

  • tjohns 4 years ago

    The whole point of satellite Internet is to fill in connectivity gaps where you can’t otherwise run high-speed fiber or wireless towers.

    It does not avoid government control. Even ignoring local legalities, at the very least it will be under the physical control of whichever country hosts the nearest ground-segment station.

    • immibis 4 years ago

      There can be multiple ground stations.

    • DarthNebo 4 years ago

      I get the connectivity point but surely you would also need unfettered internet access if you live in countries which actively censor/cripple it like Russia, China, NK etc.

    • quinnjh 4 years ago

      And arguably exists to maintain said geocontrol

fiatjaf 4 years ago

If you read this you might be interested in this protocol, which started as an idea for a truly censorship-resistant Twitter but is evolving into a generic suite of many subprotocols that involve interaction between users.

https://github.com/fiatjaf/nostr

Kinda like the "fediverse", but improved in the sense that it is not federated, but also not P2P (because pure p2p doesn't scale).

user_named 4 years ago

I think centralization is just a property of reality. Everything is centralized. Crypto is centralized in certain ways (mining capacity, holdings), capital is centralized to the top 0.1% in every country and so on.

It is better to design systems to handle centralization than with the assumption that they will remain decentralized, which would sort of break them.

  • __MatrixMan__ 4 years ago

    I disagree. As just one counterexample, consider the mycorrhizal fungal networks that mediate access to nutrients among trees--they've been doing their job without centralized intervention for billions of years.

    If it seems like everything that we build is centralized, it might just be that we're bad at building things that last.

    • meheleventyone 4 years ago

      Like the Internet? I actually think we could be good at building these sorts of things if we let go of the profit motive for doing so. History shows that we have been.

    • quinnjh 4 years ago

      Or bad at building things that decompose...

    • user_named 4 years ago

      The tree is the center.

      • __MatrixMan__ 4 years ago

        These networks handle the exchange of resources (nitrogen, phosphorous, etc) between trees. So far as I know, there's no reason to believe that there's a "leader tree" or anything like that.

  • meheleventyone 4 years ago

    This is something touched on by The Tyranny of Structurelessness and further confirmed by the way companies with a flat internal hierarchy operate. Where there is decentralisation there are implicit power relationships and, in the terms of the root essay here, platform and indirect centralisation. It's the why of democracy in anarchist organisation.

walrus01 4 years ago

1970: we're going to build an unbreakable worldwide network to survive a nuclear war

2021: AWS us-east-1 is down, which means my coffee maker doesn't work

  • betterunix2 4 years ago

    Uh...

    1970: Early networks suffered from congestive collapse problems, routing protocols were slow to converge, computed suboptimal routes, and had count-to-infinity problems, only a handful of transit networks existed, domain names were managed by one dude broadcasting a file to everyone, little to no security infrastructure, etc.

    2021: We have robust congestion control and queue management, scalable routing protocols that find optimal routes and have no count-to-infinity problems, DNS, large numbers of transit networks with a high level of redundancy, and at least some security infrastructure in key places (DNSSEC, RPKI, etc.).

    Don't confuse web infrastructure and hosting services with the Internet itself, which is the network and which has never been more distributed or more robust than it is today.

    • cj 4 years ago

      I see your point, but do all those network-level improvements mean much to the end user when so many critical services live in datacenters owned by one company? Many of them in a single region on the East Coast that goes dark at least a couple of times a year for hours on end?

      • betterunix2 4 years ago

        Some perspective: in 1986 the Internet experienced a congestive collapse that reduced the useful throughput by 3 orders of magnitude. End users noticed, and if it happened today there would be pandemonium.

        The Internet has become so robust and reliable that end users take the network for granted. At this point most of the headline-making incidents involve services running on the Internet, not the Internet itself (at least in the US, EU, and other highly-connected regions/countries; there are still countries that can be taken offline by just one cable being severed, and their Internet users cannot take the network for granted yet). In my adult life there have only been a handful of significant, global outages/reductions in service quality on the Internet itself. Regional outages happen from time to time, though few are significant enough to make national or international news.

        So yes, network-level improvements mean a lot to end users -- they mean that end users can rely on the network itself, and only have to worry about problems at higher levels of the stack.

        • catlikesshrimp 4 years ago

          I love the internet as it is today (Facebook and Google aside); the system is reliable enough for us. But the discussion is about centralization, not about infrastructure.

          The day Facebook was down for a few hours, I was asked why the internet was down. That person uses the internet for FB and WhatsApp (also FB). A decentralized communication protocol and a federated social network wouldn't fail completely under the same circumstances.

          • Gigachad 4 years ago

            I think that's more a statement about how reliable the current system is: users assume their own network is down rather than Facebook, because that would be unthinkable.

          • meltedcapacitor 4 years ago

            Facebook could also run federated technically, so that it never fails all at once. It failed because they put all the eggs in one basket in some top layer.

            Technical decentralization is orthogonal to commercial issues.

            • immibis 4 years ago

              If it was federated it wouldn't be Facebook, it would be Mastodon.

              If it was federated and Facebook still controlled it, it would still be centralized.

      • edoceo 4 years ago

        Is it really that frequent? Or does it just feel huge cause it owns so much?

    • walrus01 4 years ago

      It wasn't supposed to be serious, but for anyone that's ever seen a catastrophic level3 failure as a peer or large customer of AS3356... It's less resilient than you think. There are way too many eggs in one basket in some places.

      • betterunix2 4 years ago

        Global outages or reductions in service quality are very rare these days. Regional outages happen from time to time but are not very common. Local outages are frequent, but irrelevant -- the Internet is meant to be resilient to local outages, which presupposes that local outages are a common concern. Obviously if you shrink your scope enough you will be able to say problems happen regularly -- considering only the connectivity in my home there are many incidents each year.

        The most significant incident in recent memory involved a severed fiber optic line in New York City earlier this year, which affected the US Northeast in various ways. The impact was relatively short-lived, and despite living and working in NYC I was not personally affected at all -- even though I use Verizon FIOS and the line in question was operated by Verizon (a testament to how resilient Verizon's own network is). That is the mark of an extremely robust system -- a major, overly-centralized component (one cable carrying many supposedly redundant links) is destroyed and the effects remain highly localized and the impact is not universal even within the local area.

  • Karrot_Kream 4 years ago

    Unfortunately I don't think most users care. Give someone the choice between a cheap service with two nines (or even one) of reliability and a more expensive one with four, and the vast majority of people will go with the former. I mean, most ISPs in the US offer business connections with real SLAs but almost nobody is willing to pay that much for guaranteed bandwidth and uptime.
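
    For a sense of scale (simple arithmetic, not figures from any provider), the downtime each level of "nines" allows per year:

      # Allowed downtime per year at various availability levels ("nines").
      MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600
      for availability in (0.9, 0.99, 0.999, 0.9999):
          downtime = MINUTES_PER_YEAR * (1 - availability)
          print(f"{availability:.2%} uptime -> ~{downtime:,.0f} minutes down/year")
      # 90%    -> ~52,560 min (~36.5 days)
      # 99%    -> ~5,256 min  (~3.7 days)
      # 99.9%  -> ~526 min    (~8.8 hours)
      # 99.99% -> ~53 min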

gefhfff 4 years ago

A recent example is "Messaging Layer Security" (MLS).

While Wire and Matrix are working on a decentralized version, the IETF is, unfortunately, working towards one based on a central entity.

Source:

https://news.ycombinator.com/item?id=25102916

https://matrix.org/blog/2021/06/25/this-week-in-matrix-2021-...

  • Arathorn 4 years ago

    As far as we know, the Wire version is still logically centralised, using a centralised sequencing server.

    On the Matrix side we’re working on fully decentralising it (as per https://matrix.uhoreg.ca/mls/ordering.html). There’s also a cool similar project from Matthew Weidner: https://dl.acm.org/doi/10.1145/3460120.3484542

    It’s a bit perplexing that mnot’s draft cites XMPP as decentralised, given MUCs are very much centralised to a single provider which entirely controls that conversation, and if that provider goes down the conversation is dead. But I guess that’s because XMPP is submitted to the IETF, and Matrix isn’t yet.

    • zaik 4 years ago

      XMPP is still a decentralized protocol by design. That you can't send messages to a conference hosted on a server that is offline doesn't make it 'centralized'.

      • Arathorn 4 years ago

        There are probably five broad levels of decentralisation here:

        1. Open network, but each user lives on a single server, each conversation is dependent on a single server: (XMPP MUCs)

        2. Open network, but each user lives on a single server (with some ability to manually migrate between servers), conversations are replicated across all participating servers: (Matrix, ActivityPub, SMTP, NNTP)

        3. Open network, users are replicated across multiple servers, conversations are replicated across all participating servers: (Matrix + MSC1228 or MSC2787 or similar)

        4. Open network, users live on a single P2P node, conversations are replicated across all participating nodes: (Briar, today's P2P Matrix)

        5. Open network, users are replicated across multiple P2P nodes, conversations are replicated across all participating nodes: (P2P Matrix + MSC2787 etc).

        So yes, XMPP is decentralised by some definition, but it's kinda useful to map out the whole space.

  • rvz 4 years ago

    Interesting to see Wire and Matrix making an effort in this. Unlike Signal, which still requires your phone number and is completely centralized to their servers whilst promoting their 85% pre-mined cryptocurrency that they can dump at any time.

nathias 4 years ago

> Some protocols require the introduction of centralization risk that is unavoidable by nature. For example, when there is a need for a single, globally coordinated 'source of truth', that facility is by nature centralized.

No, there is nothing unavoidable in making a centralized DNS system.

  • AnthonyMouse 4 years ago

    Right. Even putting aside blockchain-based systems, you can have systems without a dictator because they're based on voting.

    Suppose the root is a set of public keys, each with a top level domain. Adding one requires a supermajority of the others to agree. Removing one is impossible; it can sign its own successor and that's it. You now have a federated system with no single chokepoint.
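
    A toy sketch of that rule (the 2/3 threshold and the placeholder names and keys are illustrative; real signatures are omitted):

      # Federated root: TLDs are identified by public keys, and adding a new TLD
      # requires a supermajority of existing TLD holders. Removal isn't modelled,
      # matching the "removing one is impossible" property above.
      SUPERMAJORITY = 2 / 3   # illustrative threshold

      class FederatedRoot:
          def __init__(self, initial_tlds):
              self.tlds = dict(initial_tlds)   # tld name -> public key (placeholder strings)

          def add_tld(self, name, pubkey, approvals):
              voters = set(self.tlds)          # one vote per existing TLD holder
              yes = len(voters & set(approvals))
              if yes / len(voters) >= SUPERMAJORITY:
                  self.tlds[name] = pubkey
                  return True
              return False

      root = FederatedRoot({"com": "pk_com", "org": "pk_org", "net": "pk_net"})
      print(root.add_tld("example", "pk_example", {"com", "org", "net"}))  # True
      print(root.add_tld("spam", "pk_spam", {"com"}))                      # False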

    • mindslight 4 years ago

      I'd say blockchain naming and the system you describe are still both centralized. The authorities are distributed, but they're still collectively responsible for deciding on a single coherent root.

      Compare with systems that don't revolve around making any coherent global view, like Petnames. In the context of Zooko's triangle - do the human readable name lookup once as part of a manual process, and then persist the relationship as decentralized/secure but not human-readable.

    • notriddle 4 years ago

      > because they're based on voting

      All voting systems require protection against Sybil attacks. The best methods to protect against Sybil attacks are centralized. The not-best methods use proof of work, which has extreme downsides and only makes Sybil attacks expensive, not impossible.

      • AnthonyMouse 4 years ago

        > All voting systems require protection against Sybil attacks.

        The voting system is the protection against Sybil attacks. A successful Sybil attack would allow you to add your own TLDs, but if you can't already do that then you can't do a Sybil attack.

        The real issue with that kind of system is deciding on the initial group of voters.

        • notriddle 4 years ago

          How do the existing group of voters prove to themselves that the to-be-added voter is actually a unique person, and not a sockpuppet? If it’s something like a government ID, then they’ve outsourced the Sybil protection, and not actually gotten rid of it.

        • immibis 4 years ago

          What stops me adding 10000000000 fake users and having them all vote for my new TLD?

      • guerrilla 4 years ago

        Sybil attacks aren't possible in what was just described though since it requires a vote to create a new voter.

  • acdha 4 years ago

    I think that's a question of whether you're thinking of “unavoidable” in the technical or social context. You could design a system where anyone can run their own root but would you want to operate a business in a world where your advertisements need to list which of the DNS roots you use and spend time paying to register with everything which becomes popular enough that spammers would consider registering your names? That seems even worse than the proliferation of top-level domains since there at least the name which people see is actually different than the one you advertise.

  • ggm 4 years ago

    You convert a single point of truth into a set of points of truth plus a voting system. The current cryptographically signed model uses a single signing authority. I suppose you could argue for multiple independent signers, but that raises the question of how you arrived at what was to be signed. That's a process leading to a unitary decision.

  • perryizgr8 4 years ago

    In fact, aren't torrent magnet links exactly this kind of system? You can locate files on thousands of peer systems without using a centralized tracker.

  • preseinger 4 years ago

    What? Practical addressability requires centralization by definition. How could it work otherwise?

    • sweetbitter 4 years ago

      If we go for human-readable and decentralized with reasonable security, we can look towards 'petname' systems, like the 'Address Book' software popular on the I2P network. Petnames are the subjective binding of public keys to shortened names, where a user resolves domain names to keys according to a combination of their own personal list (which can be easily expanded with specially-formatted 'address helper links') and the lists of third parties which they have chosen to trust (or which they have been bootstrapped into, thus trusting the bootstrapper by proxy). No need for environmentally unfriendly proof of work schemes or what-have-you. This falls on the 'less secure' point of Zooko's Triangle in the sense that, although it is very practical, decentralized, and usually avoids conflict, it does have trust-based security flaws if those providing their lists ('resolvers' if you will) manage to swap out keys for a petname without anyone noticing.
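
      A minimal sketch of that lookup order (the user's own petname list first, then the lists of trusted third parties, in order; all names and keys here are made up):

        # Petname resolution: personal bindings win, then trusted third-party
        # lists are consulted in order of trust. Names and keys are placeholders.
        MY_PETNAMES = {"mums-blog": "pubkey_1111"}

        TRUSTED_LISTS = [                        # ordered by how much I trust the publisher
            {"searx": "pubkey_2222"},                            # a friend's address book
            {"searx": "pubkey_9999", "wiki": "pubkey_3333"},     # a community hub's list
        ]

        def resolve(petname):
            if petname in MY_PETNAMES:
                return MY_PETNAMES[petname]
            for published in TRUSTED_LISTS:
                if petname in published:
                    return published[petname]    # first (most trusted) list wins conflicts
            return None                          # unknown: needs an introduction/helper link

        print(resolve("mums-blog"))  # pubkey_1111
        print(resolve("searx"))      # pubkey_2222 (the friend's binding, not the hub's)
        print(resolve("wiki"))       # pubkey_3333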

    • wmf 4 years ago

      See https://en.wikipedia.org/wiki/Zooko%27s_triangle for an exploration of the tradeoffs.

      • preseinger 4 years ago

        From the PetNames site,

        > Each name set consists of three elements: a key that is global and securely unique (but not necessarily memorable); a nickname that is global and memorable (but not at all unique), and a petname that is securely unique and memorable (but private, not global)

        An essential component of a practical addressing system is names which are global, memorable, _and_ unique. Colloquially, I need to be able to say "Google dot com" over the phone to someone, and their experience using that name needs to be identical to mine. A system that doesn't provide this property doesn't solve the problem.

    • goodpoint 4 years ago

      ...for an arbitrary definition of "practical".

      There isn't a single world-wide authority that assigns names to people or companies, or plate numbers to cars or airplanes. It's partially federated and we accept the tradeoffs.

      • immibis 4 years ago

        Humans can use ad-hoc disambiguation where needed ("it's the A1 Computers down by the lake"), but computers can't handle one name resolving to two different sites. When a1computers.com is registered twice, it won't automatically change one to a1computersdownbythelake.com and the other to a1computersbytheairport.com.

        • goodpoint 4 years ago

          > computers can't handle one name resolving to two different sites

          There are plenty of ways to do so: by configuration, by quorum, by bookmarking pet names on first use, or by asking the user (what every search engine does when people look up e.g. "netflix").

          • preseinger 4 years ago

            I'm all for embracing the invariants of eventual consistency, but an internet addressing system without an authoritative (i.e. centralized) truth is practically useless.

      • preseinger 4 years ago

        We're talking about name resolution on the internet. Some source of authority is required to keep the overall system sound. Said another way, if two people query the same name and get categorically different answers, or reliably non-deterministic answers, the system is unsound.

qnsi 4 years ago

Isn't it ironic that it was posted on GitHub?
