Show HN: This website is hosted on DNS
banner.triweb.dev

Hey HN!
I'm excited to share something I've been working on: a way to set up and launch websites directly from the domain control panel. It lets anyone with a domain name create, publish, and edit basic websites there, without a traditional hosting provider or any coding knowledge. This narrows the gap for non-technical people who want to publish simple personal and small-business websites.
This is also the first TWA (triweb application) on the triweb platform (https://triweb.com) we are working on. Triweb currently has limited functionality, and this app is mostly just a showcase of how TWAs and triweb containers work. We have an exciting lineup of upcoming features and a unique, simple vision of the decentralized web without the overhyped web3 technologies. We hope that one day triweb will become a standard platform for local-first, browser-based decentralized web applications.

About 16 years ago, I did a different experiment in hosting a website in DNS: I put small web pages into DNS TXT RRs. That way, one only needed to make a single DNS request over UDP to retrieve a web page, instead of a DNS request followed by an HTTP request over TCP. tinydns allows one to put any data into RRs, including control characters, so the RRs could even carry MIME headers. Then I put dnscache in front of it. The result was a highly resilient website with compact pages.
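For the curious, here is a minimal sketch of that retrieval path in Python with the dnspython library. The name page.example.com and the idea that a whole page lives in one TXT lookup are assumptions for illustration, not a description of tinydns or triweb internals.

```python
# Fetch a "web page" stored in a DNS TXT record with a single lookup.
# Requires: pip install dnspython
import dns.resolver

def fetch_txt_page(name: str) -> str:
    answer = dns.resolver.resolve(name, "TXT")  # usually one UDP query
    # A TXT record is a sequence of strings of up to 255 bytes each;
    # rejoin them to recover the page. Records within an RRset are
    # unordered, so a multi-record page would need explicit chunk
    # numbering in a real deployment.
    chunks = []
    for rdata in answer:
        for s in rdata.strings:
            chunks.append(s.decode("utf-8", errors="replace"))
    return "".join(chunks)

if __name__ == "__main__":
    print(fetch_txt_page("page.example.com"))  # hypothetical name
```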
I don't understand how this is "decentralized"; the first step is "Point your domain at a triweb relay server". Doesn't this make the triweb relay server the central authority?

At the moment it does, but we plan to release a Docker image of the relay server so that anyone will be able to spin up their own relay or use any other existing public relay. The triweb platform is still a work in progress, and there is a lot left to be done, released, and documented. The Banner app is mostly just an early demo of how TWAs are built and how they may be deployed to domains, which I thought might be interesting to HN.

Pretty cool, even if it's a bit of a toy. I'm no networking expert, but "without the need for web servers" isn't really correct, because a DNS server is still a type of server, right?

Using DNS for this is a very bad idea. The format that converts those strings to HTML may be good; the storage (DNS) is the bad part, especially since you can usually host such small files for free.

Using DNS for website content storage is indeed unconventional and somewhat controversial, but I wouldn't say it is a bad idea per se. TXT records were designed to store arbitrary data (up to about 4,000 bytes per single record), and the app is meant for small websites with a few paragraphs of text, as managing anything bigger through a domain control panel is rather painful.

I should elaborate on why I believe using DNS as content storage is a bad idea. First, because of the distributed nature of DNS, clients do not fetch this data directly; resolvers and DNS forwarders do the work and cache it. Second, displaying or converting this data to HTML requires a special website or a dedicated client, at which point that dedicated component should use a different storage than DNS, whatever that may be (p2p, etc.). Third, the impact on the DNS ecosystem is problematic. I will say something controversial here: DNS was designed with one goal, for you or your program to connect "to a string" that you or the string's owner control and manage, as opposed to connecting to a network address (IP) that you usually do not own, that can change at any time, and that is meaningless and hard to remember. Over the years this basic principle was extended a bit with the introduction of some policies stored in DNS that help the domain owner set up boundaries, like SPF or CAA records. Storing website data is not something I would recommend. Again, you can argue that this "client" only prints TXT data to users who would not normally be able to access it (the average browser user), but it clearly goes far beyond that, encouraging the storage of domain content (data) in DNS. Fourth, security: very few domains implement DNSSEC or CurveDNS, and clients do not use DoH en masse, so a MITM attack is a serious threat to a client rendering this data. Fifth, if this gains wide adoption (I doubt it), resolvers will stop caching TXT records as a policy, to save resources for crucial data, and the crucial data is A(ddress) records. The purpose of DNS, viewed at a high level, is to make a connection between two programs or users possible. I know that's a simplification, but it is the core idea. Sure, there are reverse DNS mappings to assign a string to a network address, but 99.99% of the time the job of DNS is to make connect() possible. You connect() to Amazon using DNS, Amazon contacts a payment processor using DNS, they connect() to Visa, which connect()s to your bank, and every connection in this payment-processing chain uses DNS to make it possible and reliable.

Let me add one more thing about why this idea seems bad to me personally. I'm not attacking you, just the idea. The way things work today, as I see it:

1. We have a physical layer: a wire.
2. We have a data-link layer: Ethernet, plus ARP or NDP, are common here.
3. We have a network layer: currently two networks are widely used, IPv4 and IPv6.
4. We have a transport layer: TCP, UDP, and QUIC.
7. We have the actual data, encrypted or not.

So where does DNS fit in this model? People say it is built on top of UDP, so it must be layer 7, right? But because of the core function of the DNS protocol, I would put DNS at layer 3, the same way I would put ARP and NDP at layer 2 even though NDP is built into IPv6; I place them there solely by the function they provide. The sole purpose of DNS is to make connect() possible without using direct network addresses; the client uses "a string" as the destination. For me this is a layer-3 protocol, because it is a helper protocol used to establish a connection. Once you have a connection, you use the other layers on top of it to transfer data reliably. Your solution puts layer-7 data into a layer-3 component by using TXT records. And if TXT records had never been invented, you could still encode binary data in AAAA records, 16 bytes at a time, and hack them into storing arbitrary data that a custom "client" processes in 16-byte chunks (sketched below). And while clever, this is not the right tool for storing layer-7 data. It is like sitting on an island with a hammer. You need to cut down a tree, and you say: well, I can use the hammer to cut the tree, or I can try with my bare hands. What should I do? While I agree the hammer has a higher chance of succeeding, I would advise finding a sharp rock instead. Find or invent a better tool for the job.
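To make that aside concrete, here is a hypothetical round trip that packs arbitrary bytes into IPv6 addresses, 16 bytes per AAAA record (an IPv6 address is 128 bits). Everything here is illustrative; no real service works this way.

```python
# Encode/decode arbitrary bytes as IPv6 addresses (AAAA record data).
import ipaddress

def encode_aaaa_chunks(data: bytes) -> list[str]:
    # Zero-pad to a multiple of 16 bytes, one IPv6 address per chunk.
    padded = data + b"\x00" * (-len(data) % 16)
    return [str(ipaddress.IPv6Address(padded[i:i + 16]))
            for i in range(0, len(padded), 16)]

def decode_aaaa_chunks(addrs: list[str]) -> bytes:
    # Strips the zero padding; a real scheme would need a length
    # prefix, plus per-chunk sequence numbers, since DNS does not
    # guarantee record order within an RRset.
    raw = b"".join(ipaddress.IPv6Address(a).packed for a in addrs)
    return raw.rstrip(b"\x00")

records = encode_aaaa_chunks(b"Hi, I'm X, love cats")
assert decode_aaaa_chunks(records) == b"Hi, I'm X, love cats"
print(records)
```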
Thank you for your insights. I understand the concerns you've raised, but there's also another side to DNS that's often overlooked: its role as a globally available, distributed database for storing small amounts of data linked to domain names. Now, I'm not saying that people should start hosting their files on DNS, but for simple, personal "banner"-like pages that say "Hi, I'm X, love cats, catch me at @email", that should be OK; it may be easier than setting up separate hosting, and it makes the web more inclusive for non-technical users. Storing and transferring a few hundred bytes for this purpose, under a distinct namespace/zone and in a niche use case, should be well within the capacity of existing DNS software and infrastructure. We already have other standardized, dedicated classes of informational ("layer-7 data") DNS records like HINFO and RP, so why not store a piece of text that has meaning for humans and present it in a nicely formatted (themes) and accessible (HTTP) way?

The way I see it is as a scale:

- DNS TXT records may be OK for small, simple, informational pages;
- static hosting (GitHub Pages, Cloudflare Pages, etc.) should be used for longer texts, pages with custom assets (e.g., images), or multi-page sites;
- SSGs and dedicated platforms (Jekyll, WordPress, etc.) for blogs and bigger websites;
- VPSes, EC2, Lambda, Firebase, etc. for bigger things;
- ending, probably, with dedicated servers, networking, and your own ASN for big online platforms.

There is always some amount of data or activity above which one should switch to the next level, but switching requires additional knowledge and resources; and the other way around, buying and maintaining a dedicated server just to host a simple, static website with low traffic is overkill.

Also, one thing about the app's architecture may be worth highlighting: the app uses a pre-set DoH provider (currently Cloudflare) to resolve DNS queries on the client side, so the impact on the DNS ecosystem is close to none, as only the source DNS server and the DoH resolver store and transfer these TXT records. This setup also protects sites against MITM attacks. If the traffic becomes too big for Cloudflare's infrastructure (if that's even possible), we can always set up and switch to our own dedicated DoH endpoint. Similarly, if a DNS hosting provider thinks it's abusing their servers, they can raise prices or limit the number and/or length of TXT records their customers can set up for a single domain.
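For a sense of what such client-side DoH resolution can look like, here is a rough sketch against Cloudflare's public JSON API for DNS over HTTPS. It illustrates the mechanism only, not the app's actual code.

```python
# Resolve TXT records via DNS over HTTPS (Cloudflare's JSON endpoint).
import json
import urllib.request

def doh_txt(name: str) -> list[str]:
    url = f"https://cloudflare-dns.com/dns-query?name={name}&type=TXT"
    req = urllib.request.Request(
        url, headers={"Accept": "application/dns-json"})
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    # Each answer's "data" field holds the quoted TXT payload.
    return [ans["data"].strip('"') for ans in reply.get("Answer", [])]

print(doh_txt("example.com"))  # stand-in for a real domain
```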
Every bit of information stored in DNS serves one basic purpose: to allow connecting to a string. Additions like CAA or SPF are also consumed by programs, to set some limits on a domain's usage. Every other record you mentioned never gained any adoption, simply because this "database" is meant to be used by programs, not humans. Sure, I may use the SOA email to contact a DC, but that is the exception, not the rule. Therefore, from the perspective of a person who runs DNS courses for technical staff, I conclude that the average Joe will never be able to set TXT records properly, and professionals will not do it either, for the reasons I mentioned. And while I encourage you to pursue your goal, adoption will be close to none, so I'm not so worried about that. Adoption of this will be much lower than adoption of HINFO or LOC records (ignore RFC 8482; I mean user-provided HINFO data). Your proposal will be used by 50 people total, optimistically. For all these use cases we already have HTTP, and storage for HTTP is also a solved problem.

Thanks, that is a cool insight about what layer DNS should be thought of as. I never even thought about this.

I really do not want to start a flame war about the OSI model, but I tend to bend the OSI model and interpret the "layers" by the function they provide. Another example: let's say you spin up QuicTun, a NaCl-based encapsulation tunnel, encrypted, running over IP + UDP as a layer-7 process in userland, with no kernel implementation. For me, this VPN is layer 2 (in tap mode) or layer 3 (in tun mode), regardless of the fact that it uses all 7 layers to establish the tunnel. Sure, it uses all the layers to establish the connection, but once that is done, I have a layer-3 device for further use. Same for DNS: for me it is layer 3 because of the crucial function it provides, so I would rather say it is functionally on the same layer as IP, in the sense that it is crucial to making a connection. It does not matter that it uses all 7 layers to get me the network (A)ddress / (AAAA)ddress; it is still functionally a network layer for me. Only once this is established can I exchange data. And while DNS runs at layer 7, for me personally the function it performs is "to allow connect() to a string, as opposed to a network address (IP/IPv6)", and because of that it is layer 3 for me on a functional level, the same as VPN software used to access corporate stuff. It does not matter that the VPN uses TLS and DNS under the hood and spans all 7 layers to encapsulate my data; functionally it is layer 3 for me.

I do a lot of corporate DNS trainings, and people are surprised by this view. Software developers are unhappy that we start the "DNS course" with IP basics: unicast, anycast, a 101 overview of BGP, ARIN, provider-independent vs. provider-aggregatable address space. Developers would rather learn about DNS, not what an IP address is. The network folks sit happily through the course, but they are not happy to hear my view that "there is no DNS without the network, and there is no network without DNS". However, slowly but surely, once the full training is finished, both sides understand that IP/IPv6 and DNS are in symbiosis and can't be separated. We touch on a lot of topics, including the lack of interoperability between IPv4 and IPv6, DNS64, 464XLAT, etc. Sure, if you run an air-gapped network in some military facility, you can get things working without DNS, but 99.999% of networks can't work without DNS.

> Sure, if you run an air-gapped network in some military facility, you can get things working without DNS, but 99.999% of networks can't work without DNS.

It's funny you say that, because in the air-gapped network in the military facility I work in, we actually replicate certain public DNS entries to keep some specific systems and services working for our developers.

Of course. If you want SNMP monitors to send you an email alert, you have to set up an authoritative DNS with xyz.local and use it in the air-gapped environment for your alert mail. Hard drive failure? RX/TX errors on a switch port? Fire off mail over SMTP from an SNMP agent, and so on; the list goes on and on. A network without DNS is unusable. Apologies for my English; it is not my native language.

> you have to set up an authoritative DNS with xyz.local and use it in the air-gapped environment for your alert mail

True, but I was talking more about creating DNS entries for NPM (and other package managers) and redirecting them to our internal services. I just thought it was funny to mention, since we impersonate public sites on our internal air-gapped network.

Reminds me of using https://code.kryo.se/iodine/ (a DNS tunnel) and an empty prepaid card...

This is a nice PoC. For how long are TXT records usually cached? Triweb might not see the latest changes to the websites for quite some time.

Thanks! The time for which TXT records are cached is determined by their TTLs. When publishing a DNS TXT record, you can usually set the TTL as low as 1 minute or even less, so any changes to the content are picked up really quickly. (https://developers.cloudflare.com/dns/manage-dns-records/ref...)
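If in doubt, it is easy to check the TTL a resolver is actually serving for a TXT record, again with dnspython (example.com is a stand-in for a real domain):

```python
# Print the TTL currently served for a domain's TXT records.
# Requires: pip install dnspython
import dns.resolver

answer = dns.resolver.resolve("example.com", "TXT")
# Against a caching resolver this is the remaining TTL, which may be
# lower than the value configured at the authoritative server.
print(f"TTL: {answer.rrset.ttl} seconds")
```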
I know that it's all configurable; what I meant is that some DNS providers may apply a long TTL by default, and it may be desirable to let users of Triweb know about this possible source of confusion.

The DNS name resolution is actually not done by your system or ISP, but by a DNS-over-HTTPS (DoH) service (Cloudflare's 1.1.1.1 resolver). Cloudflare seems to respect the TTL returned by the authoritative name server for your domain, so if you are able to specify a short TTL for a TXT record in your domain control panel, any changes made to the website should be live within that time.

> so if you are able to specify a short TTL for a TXT record in your domain control panel

Triweb users are able to do that, but they don't necessarily know that they need to. For example, the Cloudflare DNS dashboard displays "auto" as the TTL by default, and one has no idea how many seconds "auto" is for a TXT record.

Looks interesting! What are the advantages (besides the ones mentioned) and what are the disadvantages?

I think the main advantage is the ease and speed of use for people who want a simple, business-card-like website or a holding/landing page. The main disadvantage is that, due to the UI/UX of domain control panels, managing anything more than a few paragraphs of text gets messy really quickly. But that could actually be an advantage, as DNS is not particularly well suited to serving large amounts of data, and the app itself is meant for small and simple websites.

Let's be honest here. The main advantage is making use of someone else's resources. Since it potentially degrades critical infrastructure and increases the cost of market entry: unethical.

Sorry, but I don't understand your point. How is this "degrading critical infrastructure" and "increasing the cost of market entry"? It's also not using someone else's resources; DNS hosting is a web service like any other. You pay for it either as part of the domain registration/renewal cost (when DNS hosting is included in the registrar's offer) or separately (e.g., Route 53 or "Advanced DNS" offerings).

DNS is a Domain Name Service, not a web service. What you're not understanding is that, the way DNS works, the DNS server closest to you (usually your ISP's, unless you've configured it differently) typically caches the results and then serves them for subsequent requests. So you are relying on the server that caches those results to serve your data. The intent of DNS TXT records was not to provide you with a free content distribution network.