Debian to require Rust as of May 2026

So much for "the universal operating system"?

Posted Nov 2, 2025 11:39 UTC (Sun) by Karellen (subscriber, #67644) [Link] (71 responses)

I kind of miss the days when Free Software was built to be as widely used as possible, and made so that it could run in as many places as possible.

Did you want this to run on a proprietary OS with half of the traditional libc missing, and half of the remainder really buggy? No problem, libiberty has your back with re-implementations of the stuff you need, and a few per-platform abstractions have you sorted.

Want your graphical app to run on X11, Windows and MacOS? Sure, our UI toolkit has backends for multiple graphical environments, and is happy to accept patches for new ones. Or, if it's too obscure, the architecture has been layered well enough that you can probably build and maintain your own private backend driver without too much churn from the toolkit end.

With the recent announcement that some future version of Gnome is going to drop all X11 support, it feels like we've lost more than just X11 support. Yes, Xorg is only barely receiving maintenance updates now, but Xorg is not the only X11 implementation ever written.

I know that supporting lots of backends is more work, and that even with shims and replacements for missing parts, some projects are left working with lowest-common-denominator subsets of the underlying features they could be using... but it feels like the community used to be able to do more, at a time when it was much smaller, with fewer people working in it.

At least `curl` still runs on over 100 OSs... :-)

So much for "the universal operating system"?

Posted Nov 2, 2025 12:04 UTC (Sun) by grawity (subscriber, #80596) [Link] (42 responses)

I do miss that. But my impression – I don't know if it is at all accurate, but that's somehow the impression I got – is that the traditional Free Software had *more* people back then, working on these particular projects, than it does now. More people who could dedicate more of their time to maintaining more stuff.

Maybe that's just what I got from seeing many of my "favorite" projects gradually dying. For example, a decade ago I'd cd ~/src/someproject, run 'git pull', and there would be 100 new commits since the previous week; these days it's 1-2 per month (and half of those just automated "update github foobar action to v5").

GTK4 still has layered backends (and the macOS & Windows backends are receiving active development so I don't think it is planning to become Wayland-only). I wonder if they'll *really* drop the X11 backend for GTK5 – I assume even BSD users run a GTK-based app once in a blue moon.

So much for "the universal operating system"?

Posted Nov 2, 2025 13:02 UTC (Sun) by pizza (subscriber, #46) [Link] (7 responses)

> I don't know if it is at all accurate, but that's somehow the impression I got – is that the traditional Free Software had *more* people back then, working on these particular projects, than it does now. More people who could dedicate more of their time to maintaining more stuff.

You remember correctly. There were more people, working on a more diverse set of hardware platforms (and use cases)... because they wanted to use the hardware/environments they had at the time. Nowadays, anything other than x86_64 or aarch64 (and perhaps soon, some flavor of riscv64) running on top of Linux is a technical curiosity or museum piece.

Another big change is that the hardware and software platforms have grown far, far more complicated, to the point where one person working on something as a hobby is rarely viable any more.

Free Software, like most infrastructure, is a victim of its own success... and the harsh reality that there is no money to be made maintaining infrastructure that everyone else depends on.

So much for "the universal operating system"?

Posted Nov 2, 2025 13:54 UTC (Sun) by Wol (subscriber, #4433) [Link] (2 responses)

> Nowadays, anything other than x86_64 or aarch64 (and perhaps soon, some flavor of riscv64) running on top of Linux is a technical curiosity or museum piece.

And separate hardware from software. People who want to support old software can compile it, fix the "errors", and carry on.

But people who want to support old hardware/architectures? Hardware dies and can't be replaced, and you've just lost that guy from the project because he can no longer test anything. I've got a MIPS R3000 mini-computer in the garage I'd love to bring back to life - I don't know if it'll even boot any more, nor the state of the Prime 250 terminal or the Wyse terminal I need to talk to it. That was the chip in the PlayStation One, and it has 8 x 4-megabyte SIMMs. And a 1GB SCSI hard drive.

Cheers,
Wol

So much for "the universal operating system"?

Posted Nov 2, 2025 14:36 UTC (Sun) by pizza (subscriber, #46) [Link] (1 responses)

> And separate hardware from software. People who want to support old software can compile it, fix the "errors", and carry on.

That presumes the [technical and legal] ability to recompile said software. But for those folks that are stuck with opaque binaries they wish (or need) to run, investing one's effort into emulating said hardware is always an option.

For example, an emulated m68k can easily run two orders of magnitude faster than the fastest silicon ever produced. To say nothing of the vastly improved I/O capabilities. This means that you typically have a _lot_ of headroom to model the rest of the hardware to a high degree of fidelity if you have a bare-metal type of system. For something with a "traditional" operating system, you can get away with only caring about the CPU and translating the syscalls into host-native operations. This is the approach that qemu's user-mode emulation (qemu-user-static) takes.

This "emulate enough of the old system to allow old binaries to run unmodified (and usually faster)" approach has been successfully used by Apple three times, by IBM many more, and numerous other folks as well.
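
The split described above (model the whole machine versus only the CPU plus syscall translation) can be sketched in miniature. This is a hypothetical toy interpreter, not qemu's actual design: a made-up "guest" instruction set in which a SYSCALL opcode traps into host-native Python operations, the way user-mode emulation forwards guest syscalls to the host kernel.

```python
# Toy sketch of user-mode emulation: interpret "guest" instructions on the
# host, but translate guest syscalls into host-native operations.
# (Hypothetical instruction set; real qemu-user-static translates actual
# machine code and real Linux syscalls.)

def run_guest(program):
    regs = {"r0": 0}   # one guest register, for simplicity
    output = []        # stands in for the host's stdout

    # Guest syscall numbers mapped to host-native implementations.
    syscalls = {
        1: lambda arg: output.append(arg),         # "write" -> host append
        2: lambda arg: regs.update(r0=len(arg)),   # "strlen"-ish -> host len()
    }

    for op, arg in program:
        if op == "MOV":
            regs["r0"] = arg
        elif op == "ADD":
            regs["r0"] += arg
        elif op == "SYSCALL":
            syscalls[arg](regs["r0"])   # trap: hand off to the host
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return regs, output

regs, out = run_guest([
    ("MOV", "hello"),
    ("SYSCALL", 1),   # guest "write"
    ("SYSCALL", 2),   # guest "strlen"
    ("ADD", 0),
])
print(out, regs["r0"])   # ['hello'] 5
```

The point of the sketch is only that everything except the SYSCALL arm is pure interpretation; the syscall table is the one place where guest semantics get mapped onto the host.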

So much for "the universal operating system"?

Posted Nov 3, 2025 13:01 UTC (Mon) by farnz (subscriber, #17727) [Link]

You can also have an emulator that understands about calling native code with the correct ABI directly, and then only use emulation for the bits you can't rebuild. That allows you to use emulation for (e.g.) a plugin host that you can't change, switching to native code for plugins you can rebuild, or vice-versa.
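
The mixed-mode idea above can be sketched as a dispatch table (all names here are invented for illustration, not any real emulator's API): call a natively rebuilt component when one exists, and fall back to emulation for the parts you can't rebuild.

```python
# Sketch of mixed-mode execution: call a natively rebuilt plugin when one is
# available, otherwise fall back to (slow) emulation of the original binary.

def make_host(native_plugins, emulate):
    """native_plugins: name -> callable; emulate: fallback for old binaries."""
    def call(plugin_name, *args):
        fn = native_plugins.get(plugin_name)
        if fn is not None:
            return fn(*args)                 # correct ABI, native speed
        return emulate(plugin_name, *args)   # unmodified legacy binary
    return call

call = make_host(
    native_plugins={"double": lambda x: 2 * x},
    emulate=lambda name, *a: ("emulated", name, a),
)
print(call("double", 21))     # 42
print(call("legacy_fx", 7))   # ('emulated', 'legacy_fx', (7,))
```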

So much for "the universal operating system"?

Posted Nov 3, 2025 5:32 UTC (Mon) by raven667 (subscriber, #5198) [Link]

> Nowadays, anything other than x86_64 or aarch64 (and perhaps soon, some flavor of riscv64) running on top of Linux is a technical curiosity or museum piece.

I think this is very true: as the dominant software platform, Linux is moving forward with the current and future computers that people are using, not maintaining support for retro computers, even where that support is mature. I think there are other projects which _do_ very much want to keep those museum pieces working, like NetBSD. Instead of spending one's energy arguing for attention and resources from mainstream kernel/distro maintainers who aren't really invested in supporting your niche, spend that energy with a group whose whole thing is keeping those niches alive and maintained, and who will be glad for the help. It might feel like a bad break-up after spending so long with Linux, but the BSDs are awesome too, and I'm confident that people will find things they really like about them, instead of mourning the change if Debian drops an unmaintained architecture.

So much for "the universal operating system"?

Posted Nov 5, 2025 4:28 UTC (Wed) by wtarreau (subscriber, #51152) [Link] (2 responses)

> You remember correctly. There were more people, working on a more diverse set of hardware platforms (and use cases)... because they wanted to use the hardware/environments they had at the time. Nowadays, anything other than x86_64 or aarch64 (and perhaps soon, some flavor of riscv64) running on top of Linux is a technical curiosity or museum piece.

Indeed, and let's not forget that the population using computers (and linux in general) has radically changed in the last 30 years. Originally they were computer enthusiasts who wanted to see that software work on their system because they found it fun and pleasant. Nowadays, everything works out of the box. The vast majority of users don't even know what their boot loader is (nor even what a boot loader is). Thus it is very rare that trying to install linux on a computer results in that feeling of "hmm, almost working, let's see if I can fix it" that many of us went through.

We're now literally seeing kids switching between distros based on the default appearance of the window manager. It's great that things have reached this level of ease of use, but at the same time it doesn't require understanding anything, and there are so many stacked pieces that, even for those working on this software, any customization is far more complicated than it used to be. So it's hard to blame new users for not being willing to engage in customizing, porting and patching a system that's hard to understand, when it takes one minute on gigabit fiber to download an alternate distro and flash it to a thumb drive, and five extra minutes to install it.

It just makes me wonder who will do that software maintenance over time; I guess that in a few decades linux will likely only exist as commercial distros which have an incentive to do that maintenance. What we're seeing with distros dropping support for legacy stuff that has worked for decades is only a preview of how our choice of software will progressively shrink, to the point of accepting the last supported environment and set of software/languages maintained by some distro that we're lucky to still be allowed to download for free for personal use. It will then change again when some users want to regain some freedom, and will be called "hackers" for daring to rebuild from source :-)

So much for "the universal operating system"?

Posted Nov 5, 2025 11:22 UTC (Wed) by pizza (subscriber, #46) [Link] (1 responses)

> It just makes me wonder who will be going through that software maintenance over time, and I guess that in a few decades linux will likely only exist as commercial distros who have an incentive to do that maintenance.

"Linux" I think will be fine, though once Torvalds is no longer with us I see it inevitably fragmenting around commercial siloed interests. Everything else, on the other hand... XKCD #2347 is gonna hit *hard* when the current crop of maintainers dies off and we end up in a "The Machine Stops" [1] scenario, where everything is mostly fine... until it suddenly isn't.

[1] https://en.wikipedia.org/wiki/The_Machine_Stops

Loss of maintainers and XKCD #2347

Posted Nov 5, 2025 14:06 UTC (Wed) by farnz (subscriber, #17727) [Link]

I suspect that it's going to be more like what has already happened in banking and other industries that adopted the IBM S/360 early; some companies will rewrite to migrate away from the thing that depends on someone who's died, others will pay more and more to keep the existing binaries working.

The open question is whether that will fragment into lots of separate paid groups, each charging for private fixes to keep the old systems going, or whether it'll be shared fixes the way Red Hat has historically done, funded by the paying customers who can't afford for their systems to fail.

So much for "the universal operating system"?

Posted Nov 2, 2025 17:29 UTC (Sun) by josh (subscriber, #17465) [Link] (31 responses)

> I do miss that. But my impression – I don't know if it is at all accurate, but that's somehow the impression I got – is that the traditional Free Software had *more* people back then, working on these particular projects, than it does now. More people who could dedicate more of their time to maintaining more stuff.

I don't think that's the case. FOSS has far more people now than it did then, but one difference is that they're working on more projects, and on larger ones; user expectations are higher, bandwidth is limited, and something has to give.

Having a distribution full of software run on a variety of different architectures (or OSes or init systems or other variations) is a huge amount of work, and growing all the time since there's more software and bigger software. If that work is going to happen at all, someone has to do that work.

Historically, one way of doing that is to push some of the work onto every individual package maintainer, which leaves less work for the maintainers of the port to that architecture/OS/etc. However, that assumes that the maintainers are all willing to do that work, and amortize the cost of those ports over all the packages, displacing other things they could be doing with their time.

That assumption doesn't always hold anymore, and there is a threshold of niche-ness for which many people aren't necessarily willing to do work on behalf of that niche. That's especially true if the port is less capable, or missing features, or missing dependencies.

To quote and generalize Rust's Target Tier Policy (https://doc.rust-lang.org/nightly/rustc/target-tier-polic...), which I wrote to address this exact problem that projects encounter:

> Tier 2 and tier 1 targets place work on [...] project developers as a whole, to avoid breaking the target. The broader [...] community may also feel more inclined to support higher-tier targets [...] (though they are not obligated to do so). Thus, these tiers require commensurate and ongoing efforts from the maintainers of the target, to demonstrate value and to minimize any disruptions to ongoing [...] development.

Not every port has the same priority. It's not that people are *unwilling* to have code work on a niche target that hasn't had new hardware for decades; it's that people are not going to subsidize the cost of that work for the target maintainer. Some targets are *everyone's* responsibility; some targets are *the target maintainers'* responsibility.

In this case, the maintainer of apt is saying "I'm not going to do this work for you, and I'm only going to wait so long for you to catch up before you're left behind".

Let the old hardware go to museums

Posted Nov 2, 2025 17:53 UTC (Sun) by DemiMarie (subscriber, #164188) [Link] (20 responses)

Old hardware may definitely be of sentimental value, but practical for serious work it is not. The only exception I know of is embedded devices, but those run old software as well.

Let the old hardware go to museums

Posted Nov 3, 2025 5:35 UTC (Mon) by jmalcolm (subscriber, #8876) [Link] (1 responses)

What is "serious work"?

Books that have generated billions in revenue have been written in WordStar on DOS. You could do that on a 40 year old computer.

I have a 15 year old computer that I use almost daily. It does office stuff and runs a web browser just great. I can deploy massive infrastructure to the cloud with it just fine (eg. Terraform). I can interface with cloud based LLMs to perform pretty amazing and modern feats. All the processing is happening on the other side of the network.

Most cybersecurity work requires extremely modest hardware. You can do a lot of pretty "serious" work on old kit.

Embedded development can be done on hardware with pretty meagre resources. Selling the resulting embedded hardware can be pretty big business.

And let's be honest. A lot of "serious" management work is Word, Excel, PowerPoint, and Outlook (or their equivalents). Fairly ancient computers serve perfectly fine for this.

Not everybody is video editing, running massive IDE apps, or training AI.

Let the old hardware go to museums

Posted Nov 3, 2025 8:57 UTC (Mon) by taladar (subscriber, #68407) [Link]

Serious work is the kind where you don't just make your own life harder for no rational reason because you consider working with obsolete architectures fun. Serious work pragmatically uses the dominant architectures now because they are cheap, easy to buy and well supported.

Let the old hardware go to museums

Posted Nov 3, 2025 5:37 UTC (Mon) by jmalcolm (subscriber, #8876) [Link] (3 responses)

What is "serious work"?

Books that have generated billions in revenue have been written in WordStar on DOS. You could do that on a 40 year old computer.

I have a 15 year old computer that I use almost daily. It does office stuff and runs a web browser just great. I can deploy massive infrastructure to the cloud with it just fine (eg. Terraform). I can interface with cloud based LLMs to perform pretty amazing and modern feats. All the processing is happening on the other side of the network.

Most cybersecurity work requires extremely modest hardware. You can do a lot of pretty "serious" work on old kit.

Embedded development can be done on hardware with pretty meagre resources. Selling the resulting embedded hardware can be pretty big business.

And let's be honest. A lot of "serious" management work is Word, Excel, PowerPoint, and Outlook (or their equivalents). Fairly ancient computers serve perfectly fine for this.

Not everybody is video editing, running massive IDE apps, or training AI.

Let the old hardware go to museums

Posted Nov 3, 2025 14:53 UTC (Mon) by epa (subscriber, #39769) [Link] (2 responses)

Surely your 15 year old computer is using the same architecture as the most popular systems now (I would guess x86_64). We are talking about architectures which even in 2010 would have been considered retro.

Let the old hardware go to museums

Posted Nov 4, 2025 15:22 UTC (Tue) by willy (subscriber, #9762) [Link] (1 responses)

Your timeline is a little off. Tukwila (Itanium 9300) was released in 2010, followed by Poulson in 2012 and Kittson in 2017.

Let the old hardware go to museums

Posted Nov 8, 2025 7:42 UTC (Sat) by anton (subscriber, #25547) [Link]

But jmalcolm's 15 year old computer is very likely an AMD64 machine and not an Itanium machine. Very few Itanium 95xx (Poulson) or 97xx (Kittson) machines are for sale, which means that either everybody who has one holds on to it, or very few have been manufactured and sold in the first place. My guess is: both. And the low number of Itaniums produced and in the used-computer market also means that few hobbyists have them and so there are not that many people who would continue to do any porting work to this architecture.

Let the old hardware go to museums

Posted Nov 8, 2025 7:23 UTC (Sat) by anton (subscriber, #25547) [Link] (13 responses)

Whether old hardware is practical for serious work depends on the work. In some cases it is not just sufficient, but necessary for the work. E.g., for our research (paper, especially figures 11 and 12) we measured performance effects on microarchitectures going back to the K8 (originally released in 2003), in order to understand why a particular optimization had little effect on earlier hardware, yet gives speedups of up to a factor of 3 on recent hardware.

A colleague of mine is still using a pair of Pentium III machines from 2000 because newer machines don't work for the setup he uses.

Admittedly, both examples are not about any architectures that are in danger of being unsupported by Debian, but I am sure that there are also people who do serious work with those machines. Every year or so I turn on our Alpha 21264 machine in order to measure this or that, although nothing as serious as doing it for a paper, yet.

Let the old hardware go to museums

Posted Nov 13, 2025 9:23 UTC (Thu) by dvdeug (subscriber, #10998) [Link] (12 responses)

Every year or so you turn on your Alpha. Is it even worth your time to keep upgrading it to the latest version of Debian every other time you turn it on?

And you're asking people to spend time to keep programs running on a system that you turn on once a year? How much are you willing to pay for an external support contract for that system? If the answer is "nothing", why should other people let that be a blocker to advancing their systems? You could pay someone what it costs to port Rust to Alpha; if it's not worth it to you, why should it be worth it to somebody who doesn't own an Alpha?

There's a quarter million systems responding to Debian's popcon, of which 3 are Alpha. There's about an 80,000 to one ratio of popcon systems running x86-64 versus Alpha. It seems a bit unreasonable to block development at that point, especially given that the ratio is just going to increase; Alpha is old, slow hardware that had around a half million systems ever made. There's around a quarter billion new x86-64 systems a year.
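
As a quick sanity check of the ratio quoted above, using the comment's own figures (a quarter million popcon reporters, 3 of them Alpha), not live popcon data:

```python
# Illustrative figures from the comment above, not a live popcon snapshot.
popcon_total = 250_000   # systems responding to Debian's popcon
alpha = 3                # of which Alpha
print(round(popcon_total / alpha))   # 83333
```

Since nearly all reporters are x86-64, "about an 80,000 to one ratio" is the right order of magnitude.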

Let the old hardware go to museums

Posted Nov 16, 2025 16:59 UTC (Sun) by anton (subscriber, #25547) [Link] (11 responses)

For the work I do with the Alpha, I don't need the latest version of Debian, so I don't upgrade it.
> And you're asking people to spend time to keep programs running on a system that you turn on once a year?
If this is a question, the answer is no. Remember that the claim was that old hardware is not practical for serious work. That's wrong. Some work needs old hardware. My work does not need up-to-date software on the old hardware (except my own); this may be different for others.

Concerning the title, we spend money on museums exactly to be able to research old artifacts. Museums are not there for letting old artifacts rot.

I do my part by working on keeping the software I work on available on old hardware and otherwise fringe systems in addition to the mainstream.

The systems that I port to do not include s390x, because we don't have one and there is none at cfarm.net. Which poses the question: how come Debian has s390x as an official port? There probably have been, and probably ever will be, far fewer of them made than Alphas; in popcon there are 17 s390x systems: ten times fewer than the 173 armel systems (armel will be dropped from the official ports with Debian 14), three times fewer than the 51 powerpc systems (powerpc was dropped from the official ports with Debian 9), and fewer than the 40 powerpc64 systems (never an official port).

Let the old hardware go to museums

Posted Nov 17, 2025 4:22 UTC (Mon) by mathstuf (subscriber, #69389) [Link] (7 responses)

> <s390x popcon results>

In my (limited) understanding of s390x machines and their use cases…why would the "important" deployments of such machines be configured to report to popcon?

s390x, ppc64le, and popcon

Posted Nov 17, 2025 8:15 UTC (Mon) by anton (subscriber, #25547) [Link] (6 responses)

Why would anyone configure their machines to report to popcon?

I doubt that s390x and ppc64le made it into the Ubuntu, Suse and Red Hat supported architectures by popular demand, or because so many s390x and ppc64le users pay them directly for it; that leaves IBM paying them (or, for Red Hat, maybe ordering them) to do it. The commercial Linux distributors and IBM also have an interest in Debian volunteers and upstream developers getting the impression that s390x and ppc64le are relevant, and popcon is a good way to achieve that (see above), so better to turn popcon on for the build VMs (or real hardware) for these platforms that run Debian.

Orders of magnitude fewer ppc64le machines have been built than ppc machines (and more ppc64 machines have been built than ppc64le machines), and yet powerpc64le popcon reporting currently is 1/5 of the maximum ever reported for ppc, and more than twice as much as the highest number ever reported for ppc64; and even ppc64 reporting was in the range 1-2 up to 2016 (i.e., long after the PowerMac G5 was discontinued), and only picked up starting in 2017. Given the low numbers of real hardware, my guess is that the increase in ppc64 and ppc64le popcon reporting does not reflect an increase of Debian installations on these systems, but just an increase in the percentage of such hardware (or VMs) with Debian that reports to popcon. Maybe there are images for Debian on these architectures that have popcon turned on.

A related popcon result is for the various qemu packages: qemu-system-ppc currently has 1111 votes, qemu-system-s390x has 668, and qemu-system-misc (which supports Alpha, HPPA, m68k, and others) has 735.

As for the "important" deployments of s390x, I don't expect any of them to run Debian, so they naturally don't report to popcon.

s390x, ppc64le, and popcon

Posted Nov 17, 2025 9:29 UTC (Mon) by Wol (subscriber, #4433) [Link] (5 responses)

The other "little" thing is that PCs are mostly single-user. IBMs are multi-user so, while it's unlikely, it's entirely plausible that the 360 et al ecosystem has more users than the pc ecosystem.

Okay, I think this was a z800 (different mainframe, same family, still many moons ago), but some Scandinavian ISP was buying them to run linux VMs on. Once the user count hit about 1500 it was (a) not stressing the machine, and (b) (much) cheaper than a rack of PCs. And (c) might well have mis-reported as x86 because I believe it was mostly running pc emulators.

Cheers,
Wol

s390x, ppc64le, and popcon

Posted Dec 2, 2025 9:25 UTC (Tue) by anton (subscriber, #25547) [Link] (4 responses)

That depends on how you define "user". If the owner of a bank account or a credit card is a "user" of a mainframe that processes the bank account or credit-card data, then maybe yes. If you define as users those who visit a web page, then no (I don't think that many web pages are hosted on s390x).

Gitlab reports the number of accounts on a gitlab server as "users". With that, our gitlab server has 3747 users. It runs on a PC (a Ryzen 3900X machine with 128GB RAM). That's not stressing the machine.

A traditional notion of user in a Unix/Linux system is someone who has an account in /etc/passwd with an actual login shell (to exclude all the system accounts). For this notion, one of our "PCs" (a Socket-1200 machine with an 8-core Xeon E-2388G and 128GB RAM) has 2080 users. That's not stressing the machine. I doubt that there is any s390x system with that many users. In 2002, when the z800 was released, we used an Alpha (with one CPU) for the same purposes, with similar numbers of users; now this machine is claimed by some to be too slow for anything.
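
The traditional notion described above, /etc/passwd entries with a real login shell, amounts to a short filter. A sketch with invented sample data:

```python
# Counting "users" in the traditional sense: /etc/passwd entries whose
# shell field is a real login shell.  The sample data below is invented.
NON_LOGIN = {"/usr/sbin/nologin", "/sbin/nologin", "/bin/false", ""}

def login_users(passwd_text):
    users = []
    for line in passwd_text.splitlines():
        if not line or line.startswith("#"):
            continue
        fields = line.split(":")
        if fields[6] not in NON_LOGIN:   # field 7 of passwd(5) is the shell
            users.append(fields[0])
    return users

sample = """\
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
alice:x:1000:1000:Alice:/home/alice:/bin/zsh
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
"""
print(login_users(sample))   # ['root', 'alice']
```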

Concerning the z800 with its 4 Blue Flame CPUs (plus one Blue Flame doing something else) running at 625MHz, available from 2002: I very much doubt that it would be competitive in performance with a 4-socket Xeon machine of that vintage (probably not even with a 2-socket Xeon or Athlon MP machine) when running the same software, and definitely not when running that software in a PC emulator.

There is a reason why IBM does not allow reporting benchmark results from IBM mainframes, and it's not because the benchmark results are so great that they would convince more people to buy mainframes instead of PCs.

Mainframe advantage

Posted Dec 2, 2025 9:59 UTC (Tue) by farnz (subscriber, #17727) [Link] (2 responses)

Historically (and I do not know if this is still the case - it was when Linux on IBM Z was new), the mainframe's advantage as a Linux box was an I/O subsystem and memory capacity like nothing else on the market, coupled with adequate processing. If your loads were I/O bound, or bounded by memory capacity, then a mainframe for Linux VMs could make financial sense.

But on processing power per $, the only way the mainframe made sense for Linux VMs is if you had "spare" capacity on the mainframe - because you had enough hardware to cover an annual peak, or because hardware comes in discrete lumps, and the next size down is too small. At that point, paying IBM to let you use the hardware for Linux is better value than adding another box to the network.

If this still holds true, then a mainframe will be competitive with a Xeon or EPYC for processing performance, but not better; where it'll win is in maximum system RAM and in I/O capacity. And I can see that in a VM hosting situation, RAM and I/O are your normal bottlenecks, not processing power, and thus the IBM mainframe could be a competitive answer - because you're able to widen the bottlenecks further than Intel or AMD machines of the same era, even though the processing power isn't there.

Mainframe advantage

Posted Dec 2, 2025 12:31 UTC (Tue) by anton (subscriber, #25547) [Link] (1 responses)

Memory capacity, yes: if you need more than the 12TB RAM that a dual-socket EPYC or Xeon can support directly, the z17 is advertised as supporting up to 64TB RAM. There may be ways to grow the EPYC or Xeon machine further with CXL memory, and maybe some manufacturers offer systems beyond the two sockets that AMD and Intel support out of the box.

I/O performance: I believe it when I measure it. And given that IBM does not allow publishing I/O benchmarks, I/O performance is probably nothing to write home about, either.

I don't see why RAM capacity would be a win in a VM hosting situation. If you have more VMs than fit in the 6TB of a single-socket EPYC or Xeon (i.e., >1500 4GB VMs), you add another piece of cattle to your server farm. And 11 of those systems are probably cheaper than a 64TB z17, and they have a lot of additional CPU power.
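
A quick check of the figures above: 6TB split into 4GB VMs, and how many such single-socket boxes it takes to match a 64TB z17.

```python
import math

# Figures from the comment above: 6 TB RAM per single-socket EPYC/Xeon,
# 4 GB per VM, 64 TB maximum on a z17.
vms_per_box = 6 * 1024 // 4          # 4 GB VMs fitting in 6 TB
boxes_for_64tb = math.ceil(64 / 6)   # 6 TB boxes needed to reach 64 TB
print(vms_per_box, boxes_for_64tb)   # 1536 11
```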

Concerning I/O capacity, IBM's z17 data sheet does not specify I/O capacity in any comparable way ("Maximum number of I/O drawers: 12"? Come on!). In any case, I doubt that the I/O capacity of a maximal z17 system exceeds the combined I/O capacity of 11 EPYCs or Xeons in appropriate boards.

So no, aggregating many VMs on a single IBM mainframe rather than a few EPYCs or Xeons makes no sense. OTOH, if you have a single-instance workload that exceeds the 12TB maximum of a two-socket EPYC or Xeon machine, a z17 may make sense. Likewise if a single instance needs more I/O capacity. I don't expect that there are many people who have these requirements and can afford an appropriately configured z17, however.

Mainframe advantage

Posted Dec 2, 2025 13:08 UTC (Tue) by farnz (subscriber, #17727) [Link]

A single I/O drawer is roughly the same amount of PCIe bandwidth as a single EPYC socket. Something with 12 drawers is equivalent to 12 EPYC CPUs for I/O bandwidth - but also has an offload processor on the drawer that can accelerate some I/O operations (not normally usable by Linux).

And in the past, it made sense because IBM drawers haven't scaled particularly quickly, whereas x86 systems have. When a 4 socket Xeon came with 4 PCI-X 66 MHz 64 bit slots, the mainframe's I/O subsystem absolutely slaughtered it; even today a single mainframe is more capable I/O wise than 6 dual socket EPYC systems (but not 7).

Pricing is also variable on the mainframe, which makes comparisons hard; if IBM wants to, they can offer you a highly configured z17 for much less money than list price, with restrictions imposed by licensing to avoid you eating into IBM's margins.

For VM hosting, being able to pack more VMs into a system means that you can get closer to saturating a system. If I'm selling you VMs of between 256 MiB and 1 TiB RAM, I get a bin packing problem (NP-hard) as soon as I have multiple physical systems; there will be capacity that I'm leaving idle on a system because, while a better arrangement will pack onto fewer systems, it's hard to find that perfect packing.
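
The VM-placement problem described above is classic bin packing; the usual heuristic is first-fit decreasing. A sketch with invented VM sizes (in GiB):

```python
# First-fit decreasing for VM placement: sort VMs by RAM, place each into
# the first host with room, open a new host otherwise.  This is a heuristic
# sketch of the bin-packing problem mentioned above; sizes are invented.
def first_fit_decreasing(vm_sizes, host_capacity):
    hosts = []   # each host is the list of VM sizes placed on it
    for size in sorted(vm_sizes, reverse=True):
        for host in hosts:
            if sum(host) + size <= host_capacity:
                host.append(size)
                break
        else:
            hosts.append([size])   # no host had room: bring up a new one
    return hosts

# VMs from 0.25 GiB to 1 TiB, packed onto 2 TiB hosts.
vms = [1024, 512, 512, 256, 256, 128, 64, 0.25]
hosts = first_fit_decreasing(vms, 2048)
print(len(hosts))   # 2
```

The heuristic is fast but not optimal, which is exactly the idle capacity the comment describes: a perfect packing may exist that the placement logic never finds.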

There's also the thing about who writes the management code; for the mainframe, IBM provides the RAS software for you, whereas for Linux, you often just get the parts you need to build your own live migration system, chipkill handling etc.

I can thus see that if you get a good deal on a mainframe for VMs from IBM, it's cheaper than building your own platform out of x86-64 systems, even today. And note that IBM do a significantly cheaper variant on the mainframe that isn't licensed for things other than Linux VMs; it's not beyond the realms of possibility that IBM will offer a VM hoster a very good deal on such a machine, just for the PR win of "this hoster says their mainframe cluster is cheaper than an EPYC cluster would be/was".

s390x, ppc64le, and popcon

Posted Dec 2, 2025 10:04 UTC (Tue) by Wol (subscriber, #4433) [Link]

In this particular instance ... I define user as "a telco customer renting a virtual machine", whatever that virtual machine was running.

As for IBM not allowing benchmarks, do you remember MS's anti-linux advertising campaign from that era? I seem to remember it ran for one week before it ran foul of our "legal, decent, honest, truthful" requirement and got pulled for something like "intentionally designed to mislead". It basically compared a z800 to a Xeon 800 with loads of misleading claims.

Oh, and yes, mainframes have always been woefully underpowered in the CPU department. But in the I/O department? Comparing a PC to a mainframe is like comparing a Formula 1 to an artic (semi to you Americans). Depends what you want it for.

Cheers,
Wol

Let the old hardware go to museums

Posted Dec 2, 2025 0:33 UTC (Tue) by dvdeug (subscriber, #10998) [Link] (2 responses)

So there's no actual value in keeping Debian running on Alpha, then. Computer museums are going to be running period-accurate OSes, not the latest and greatest.

> My work does not need up-to-date software on the old hardware (except my own); this may be different for others.

Which isn't much of an argument for keeping software up-to-date on Alpha.

The RISC-V port of Debian is constantly getting bug reports that stuff won't build on it, solely because the timeouts on test cases are too short for the system. A package named Trilinos has spent 8 days building for RISC-V. And RISC-V is faster than most of the old hardware, and will be getting faster in the future.

Trilinos provides another example; it has a mysterious build failure on Alpha, as Debian still builds Alpha packages. If people want Alpha to be supported in Debian, stuff like that needs to be fixed. There's a RISC-V team that fixes such things.

> The systems that I port to do not include s390x, because we don't have one and there is none at cfarm.net. Which poses the question: How come Debian has s390x as official port?

That's a problem, and if it turns out too many packages don't build for s390x, it'll be dropped as an official port. But a look at the debian-s390x mailing list archive shows that issues posted there get a response promptly. If s390x can't keep up, then it will have to be removed from release critical. But for now, s390x is not dumping a lot of work on non-s390x developers.

Let the old hardware go to museums

Posted Dec 2, 2025 8:42 UTC (Tue) by anton (subscriber, #25547) [Link] (1 responses)

So there's no actual value in keeping Debian running on Alpha, then.
That does not follow from what I wrote. However, currently there is no value for me.
And RISC-V is faster than most of the old hardware, and will be getting faster in the future.
RISC-V is an architecture and by itself is not faster or slower than any other architecture. As for implementations, here are some results from a LaTeX benchmark
3.28s UP1500 21264B 800MHz 8MB L2 cache, RedHat 7.1 (b1)
5.492s Starfive Visionfive JH7100 (1 GHz U74) Fedora 33 (TexLive 2020)
So on this benchmark this particular Alpha implementation (21264B) is faster than this particular RISC-V implementation (U74). From what I read, build slowness on RISC-V may also be due to the particular choices made for RISC-V concerning object files and linking (I remember seeing examples of long link times for RISC-V somewhere, but if it was on that page, it is no longer there).

Let the old hardware go to museums

Posted Dec 2, 2025 22:44 UTC (Tue) by dvdeug (subscriber, #10998) [Link]

> RISC-V is an architecture and by itself is not faster or slower than any other architecture.

"And RISC-V [hardware] is faster than most of the old hardware, and will be getting faster in the future."

>So on this benchmark this particular Alpha implementation (21264B) is faster than this particular RISC-V implementation (U74).

So you're pitting one of the fastest Alphas ever made against an old and slow RISC-V implementation, and showing that the Alpha is about 60% faster. But newer RISC-V hardware comes in at twice the single-core speed in benchmarks, and orders of magnitude faster on multicore. Lies, damned lies, and benchmarks, of course, but multicore is more like what a build daemon is going to be doing than single core. And RISC-V will get faster yet, or people will stop caring about it and RISC-V Debian will go the way of Alpha Debian.

So much for "the universal operating system"?

Posted Nov 2, 2025 19:37 UTC (Sun) by glaubitz (subscriber, #96452) [Link] (9 responses)

> In this case, the maintainer of apt is saying "I'm not going to do this work for you, and I'm only going to wait so long for you to catch up before you're left behind".

Debian doesn't revolve around APT (there is an alternative called aptitude) and Julian can only speak for himself and Canonical, but not for Debian as a whole.

He is certainly not the person to say whether a port is abandoned or not and as someone directly addressed by his original mail, I'm not particularly happy about his wording to say it politely.

Rust is a great language and project. But the way some Rust proponents are trying to push the language is not particularly inclusive and welcoming.

Investing energy into ports

Posted Nov 2, 2025 21:06 UTC (Sun) by josh (subscriber, #17465) [Link] (8 responses)

> Debian doesn't revolve around APT (there is an alternative called aptitude)

aptitude depends on libapt, and seems unlikely to have broader support than apt does.

> He is certainly not the person to say whether a port is abandoned or not

He can, on the other hand, say how long he's going to hold back on potential changes to wait for ports to be ready for those changes.

> I'm not particularly happy about his wording to say it politely.

It is not obvious that *any* wording would have satisfied people who don't like the timeline. That said, it's true that "sunset the port" was a false dichotomy, as there's a third option: a port could stay on an old version of apt (and other software) indefinitely, as long as there's bandwidth to do security fixes. I imagine that many people do not regularly talk about or think about that option, but it's worth acknowledging.

There are many ways that people invest energy in keeping a port alive. One is to do work to support current technology; those who work on that are greatly appreciated. The other is to invest energy in trying to slow others down to make their work easier and more tractable. The latter tends to happen when there aren't enough people working on a port to do the former, and it produces a drag force and stress across many communities, and is a major origin of backlash and resentment towards "retrocomputing".

Old hardware can be awesome. Emulation of old hardware can be awesome. Keeping old software alive and preserved can be awesome. Two of the many things that make people stop seeing that as awesome: the expectation that others help with those ports whether they're interested or not, and the expectation that others slow down their work to give those ports time to catch up. Retrocomputing projects that don't do either of those things can be delightful and fun.

This is one reason why it's important to set expectations up front, about which targets are everyone's shared responsibility and which targets are the sole responsibility of those working on them.

Investing energy into ports

Posted Nov 3, 2025 21:01 UTC (Mon) by glaubitz (subscriber, #96452) [Link] (7 responses)

> > He is certainly not the person to say whether a port is abandoned or not

> He can, on the other hand, say how long he's going to hold back on potential changes to wait for ports to be ready for those changes.

No, he cannot. He has not a single word to say in Debian Ports.

> There are many ways that people invest energy in keeping a port alive. One is to do work to support current technology; those who work on that are greatly appreciated. The other is to invest energy in trying to slow others down to make their work easier and more tractable. The latter tends to happen when there aren't enough people working on a port to do the former, and it produces a drag force and stress across many communities, and is a major origin of backlash and resentment towards "retrocomputing".

> Old hardware can be awesome. Emulation of old hardware can be awesome. Keeping old software alive and preserved can be awesome. Two of the many things that make people stop seeing that as awesome: the expectation that others help with those ports whether they're interested or not, and the expectation that others slow down their work to give those ports time to catch up. Retrocomputing projects that don't do either of those things can be delightful and fun.

> This is one reason why it's important to set expectations up front, about which targets are everyone's shared responsibility and which targets are the sole responsibility of those working on them.

I'm not really interested in answering this. I have to say I'm starting to feel sorry for the time I have invested in the Rust project. I should have spent it elsewhere where people are more grateful.

Adrian

Investing energy into ports

Posted Nov 4, 2025 9:39 UTC (Tue) by Wol (subscriber, #4433) [Link] (3 responses)

> > He can, on the other hand, say how long he's going to hold back on potential changes to wait for ports to be ready for those changes.

> No, he cannot. He has not a single word to say in Debian Ports.

Who is actually doing the work? Who is blowing hot air?

> I'm not really interested in answering this. I have to say I'm starting to feel sorry for the time I have invested in the Rust project. I should have spent it elsewhere where people are more grateful.

I'm getting the impression that there's more hot air than actual work in Debian Ports. Am I right?

Cheers,
Wol

Investing energy into ports

Posted Nov 4, 2025 11:33 UTC (Tue) by glaubitz (subscriber, #96452) [Link] (1 responses)

> > He can, on the other hand, say how long he's going to hold back on potential changes to wait for ports to be ready for those changes.

>> No, he cannot. He has not a single word to say in Debian Ports.

> Who is actually doing the work? Who is blowing hot air?

I have made a fair amount of upstream contributions to Rust and other toolchains.

I actually enabled Rust for several architectures within Debian and made it possible that Debian could start adopting rustified packages such as librsvg.

What was your contribution?

>> I'm not really interested in answering this. I have to say I'm starting to feel sorry for the time I have invested in the Rust project. I should have spent it elsewhere where people are more grateful.

> I'm getting the impression that there's more hot air than actual work in Debian Ports. Am I right?

No, you're just being very condescending without bothering to do some research.

Investing energy into ports

Posted Nov 4, 2025 15:04 UTC (Tue) by Wol (subscriber, #4433) [Link]

Apologies, they were genuine questions.

> No, you're just being very condescending without bothering to do some research.

Which was why I asked the questions! :-)

It looks like I had completely the wrong impression.

> What was your contribution?

To Rust, nothing (which was why I knew nothing). And I avoid Debian, so I know precious little about that.

I have contributed a little elsewhere.

Cheers,
Wol

Investing energy into ports

Posted Nov 4, 2025 11:53 UTC (Tue) by josh (subscriber, #17465) [Link]

> Who is actually doing the work? Who is blowing hot air?

Please don't say that to a developer who has actually put their time and energy where their mouth is, and worked extensively to try to port Rust to more targets and deal with resulting issues that arise. Rust wouldn't run on as many targets as it does without his work.

We'd be better off if more people invested time and energy like that, trying to make Rust support their targets, and fewer people spent time trying to discourage others from using Rust at all so that they don't have to put in that effort.

Investing energy into ports

Posted Nov 4, 2025 11:12 UTC (Tue) by hailfinger (subscriber, #76962) [Link] (1 responses)

I am thankful for the time you invest into ports and the effort you spend on getting modern languages running on retro hardware.
I am looking forward to running them on the various non-x86 older machines I acquired over time. There's also the benefit of uncovering implicit assumptions in C code (signed vs. unsigned char etc.) if code gets compiled for less common architectures.

Investing energy into ports

Posted Nov 4, 2025 11:35 UTC (Tue) by glaubitz (subscriber, #96452) [Link]

Thank you very much for your kind words! This actually is wholesome to hear after the rather unpleasant comments I had to read from others.

Investing energy into ports

Posted Nov 4, 2025 12:01 UTC (Tue) by farnz (subscriber, #17727) [Link]

I have to say I'm starting to feel sorry for the time I have invested in the Rust project. I should have spent it elsewhere where people are more grateful.

I'm sorry to hear that people have been ungrateful for your efforts; while I personally don't care about m68k, your work to keep it functional (including getting m68k-unknown-linux-gnu to Rust Tier 3 status) is genuinely appreciated here - you show that keeping old hardware alive isn't a matter of "stop new changes", but rather "do the hard work that no-one else does so that we can keep up".

We need people like you in FOSS, who do the hard work to keep things relevant, not people who fight to stop change of any sort.

Growth of Debian & probable effect on number of people supporting

Posted Nov 3, 2025 5:00 UTC (Mon) by jjs (guest, #10315) [Link]

I strongly suspect there's even more people now supporting F/LOSS. But they're spread across even more projects. See my other posting, but short answer: in 2002, Debian had 4,500 packages, 105 million lines of code, in Trixie (the just released stable), there's 69,830 packages, 1,463,291,186 lines of code. That's 15.5 times the number of packages, almost 14 times the number of lines of code, in 23 years of growth. That's just Debian.

So much for "the universal operating system"?

Posted Nov 3, 2025 15:14 UTC (Mon) by nim-nim (subscriber, #34454) [Link]

Traditional Free Software (whatever that means) was an economy of scarcity: you ported to any hardware you had access to because there was damn little software to port and the hardware was damn expensive. You were lucky to procure the source code to port (that someone painfully distributed) and you were lucky to have access to the hardware; if you had access to anything, it was almost certainly maintained by a poor sod with the same access limitations.

Current software economy is a glut economy: hardware is cheap as long as you do not indulge in AI, and software blocks are plentiful and available in many languages. The difficulty now is to read the winds correctly and choose hardware and software that will attract other contributors long term (because it is well maintained, well licensed, by people not feuding with half the internet). Performance is getting marginal, since you can always buy a more powerful CPU, but maintaining the Babel tower alone because everyone else has moved to greener pastures is a Sisyphean task.

Trying to emulate 90's *nix environment in a Windows workstation when all the techs have access to real Linux/BSD systems and have little interest in re-enacting the past is a complete misreading of the winds. That’s equivalent to trying to procure a vintage steam car. Some (very few) people will be ready to help, it will be very expensive, and have no relevance to today’s tech.

So much for "the universal operating system"?

Posted Nov 2, 2025 14:40 UTC (Sun) by dskoll (subscriber, #1630) [Link] (6 responses)

I agree. Specifically in the X11 case, I think X11 support is being dropped prematurely, before Wayland is a fully-ready replacement.

And if you want an OS whose primary goal is portability and support of a huge variety of architectures... that's NetBSD.

So much for "the universal operating system"?

Posted Nov 2, 2025 15:00 UTC (Sun) by Wol (subscriber, #4433) [Link] (5 responses)

> I think X11 support is being dropped prematurely,

IN PRACTICE wasn't X11 support dropped many years ago? X11's HAL has been effectively dead almost since the dawn of Wayland, no?

Cheers,
Wol

So much for "the universal operating system"?

Posted Nov 2, 2025 15:04 UTC (Sun) by dskoll (subscriber, #1630) [Link] (2 responses)

I run the XFCE4 desktop under X11 and it still works fine. There is IMO no need to drop X11 support from graphical toolkits yet; I doubt the burden of keeping X11 support is all that high given that X11 is basically not changing any more.

So much for "the universal operating system"? - maintaining old software and X11

Posted Nov 2, 2025 17:28 UTC (Sun) by amacater (subscriber, #790) [Link]

As regards X11 support - I think the answer is that the same people that you'd rely on to keep X11 running are the ones who are working on Wayland and that there aren't nearly enough maintainers for anything.
The enterprise distributions are moving on to containers everywhere and AI - I'm not sure that anyone *cares* about X11 per se, and the number of people running Linux as their OS for daily life is minimal by comparison.
For many of them, GNOME or similar will do under Wayland. If you want to run other underpinnings / other desktops and window managers, you *will* end up tweaking and maintaining them yourself or with a very small coterie of other interested users.

This applies even if your hardware is new and fully supported and much more so to people trying to keep decade old hardware / niche architectures alive. See also discussions round keeping i386 hardware relevant here and elsewhere, for example.

So much for "the universal operating system"?

Posted Nov 2, 2025 18:02 UTC (Sun) by josh (subscriber, #17465) [Link]

> I doubt the burden of keeping X11 support is all that high given that X11 is basically not changing any more.

The problem is that other things are changing around it. Maintaining a port to X11 is still ongoing work, which adds drag and technical debt to every change made that needs to handle "and how does this work on the X11 port" (even if the answer is "it doesn't").

It's easier (not easy, but *easier*) to support X11 forever, on the same hardware forever, by running old software forever. It's *not* easy to support X11 forever, on new hardware, with new software. So people are reasonably saying "if you want to keep running old software, you can keep running other old software".

So much for "the universal operating system"?

Posted Nov 2, 2025 18:25 UTC (Sun) by Jandar (subscriber, #85683) [Link] (1 responses)

I'm forced to use a Windows laptop for work and use a multi-monitor RDP session to a dedicated headless Linux VM on premises running KDE. Easy with X but impossible with Wayland.

On X multi-monitor RDP has to be implemented once; as I understand it, on Wayland every compositor has to implement it on its own.

I hope KDE supports X until I retire.

So much for "the universal operating system"?

Posted Nov 3, 2025 8:05 UTC (Mon) by ebee_matteo (guest, #165284) [Link]

That is true. It is a missing feature. However there are at least three approaches:

* Keep running older software, and/or maintain a fork (expensive and hard, swimming against stream)
* Try to be vocal about it and convince others to stay with an old stack largely viewed as hard to maintain and understand (tricky, as most of the people doing the work on XOrg itself are against it)
* Work together with the Wayland maintainers to implement the missing features (working together with the community, less fragmentation, sounder technical foundations)

I know which one I'd pick.

So much for "the universal operating system"?

Posted Nov 2, 2025 19:07 UTC (Sun) by jem (subscriber, #24231) [Link]

This looks like an XY problem: How can I use X(11) to do Y?

Do you really need X11, or is what you really want a display server that supports the familiar GUI libraries like Qt and GTK, plus your favorite apps? Besides, as far as I know, XWayland is not going away in the foreseeable future.

So much for "the universal operating system"?

Posted Nov 2, 2025 19:58 UTC (Sun) by jlarocco (subscriber, #168049) [Link] (12 responses)

It is a bummer. Between Systemd, Wayland, and the push to put Rust everywhere, a lot of people using less popular software and hardware have gotten the rug pulled out from them lately.

On the other hand we probably shouldn't hold up progress because of a small number of users on novelty platforms.

In this case, it's not like the platforms are getting banned or removed, there's just nobody doing the work to keep them working.

I guess the alternative is to disagree with the direction and fork, but nobody's upset enough to do that, either.

So much for "the universal operating system"?

Posted Nov 2, 2025 20:39 UTC (Sun) by intelfx (subscriber, #130118) [Link] (4 responses)

> Between Systemd, Wayland, and the push to put Rust everywhere, a lot of people using less popular software and hardware have gotten the rug pulled out from them lately.

I'm not sure what's there in common between systemd, Wayland, and Rust (besides the starkly emotional reaction of "these are all new things, and we don't like all the new things")?

So much for "the universal operating system"?

Posted Nov 2, 2025 22:01 UTC (Sun) by randomguy3 (subscriber, #71063) [Link] (3 responses)

I think the commonality is just the scope of impact - Wayland affects every desktop environment and every GUI program (although toolkits cushion a lot of that); systemd affects every package that provides a service; Rust is perhaps the odd one out as it doesn't intrinsically impact anything, but it is a systems-level language that is being used more and more in foundational software, and thus its impact is growing.

That said, the type of impact has been quite different. For Rust, it's very much about supported architectures, as with this article. For Wayland, it's about use-cases (such as the multi-monitor RDP setup mentioned elsewhere in the comments) - of course, part of the point of Wayland is to allow proper support for use-cases that X11 either can't support or can only do so in a hacky, unreliable way!

Systemd's inclusion in the set is a bit hard to justify imo - the main substantial complaint I've seen that isn't just "it's not what I'm used to" is about the refusal to support kernels other than Linux, fragmenting the wider ecosystem even as it unifies the Linux one.

So much for "the universal operating system"?

Posted Nov 3, 2025 6:11 UTC (Mon) by jmalcolm (subscriber, #8876) [Link] (2 responses)

The common thread is "if NewFangled does not support my software, my software does not work". Often, it manifests as, "you need to support NewFangled or SoftwareB does not work", which makes SoftwareB not work with your other software or hardware choices.

Wayland is the obvious example. There is software that requires Wayland to work. Major software is about to be added to that list. GNOME is the tip of the spear. The first really big wave will be when GTK5 stops supporting X11 because there will be many, many useful applications written in GTK5. If you want to use those, you will have to use Wayland. And if you use Wayland, there are MANY things you lose access to, including most obviously your favourite X11 window manager.

Systemd is going down the same road with software beginning to require Systemd to work properly.

And of course the issue with Rust is that the current compiler does not support that many architectures. If you want to use more niche hardware, you are not going to be running Rust code which locks you out of an ever growing universe of applications.

This is all in the service of defining a "Linux platform" which is less fragmented but also as a consequence less inclusive.

I do not resist any of this, really. I have been Wayland exclusive for quite a long time. And I am a big fan of Rust. I use Systemd on many of my systems, though I prefer the ones that do not.

Some of this may get better. Chimera Linux may create Turnstile to allow distros to run software that requires Systemd without having to use Systemd. And there are efforts to compile Rust using the GCC back-end (restoring portability). And Wayland of course is trying to have a pretty good backwards compatibility story with both Xwayland and Wayback.

So much for "the universal operating system"?

Posted Nov 3, 2025 10:24 UTC (Mon) by taladar (subscriber, #68407) [Link] (1 responses)

Wayland really doesn't belong on this list at all because Wayland isn't a software, nor is it even a standard. It is a loosely defined group of standards that some pieces of software implement more or less identically. So it doesn't make sense to say Wayland support exists for anything specific really (as opposed to saying the abstract standard supports some abstract feature I mean).

What the Wayland project absolutely does not seem to support is clear communication about the project and the standard, which is honestly half the issue for those of us who have not yet invested the huge amount of effort it takes even to see whether switching is possible for our use cases. The "Wayland is ready" communication we have heard for the last 15 years or so that was just plain wrong for the vast majority of that time has not helped the project gain trust.

So much for "the universal operating system"?

Posted Nov 3, 2025 13:58 UTC (Mon) by Wol (subscriber, #4433) [Link]

> Wayland really doesn't belong on this list at all because Wayland isn't a software, nor is it even a standard. It is a loosely defined group of standards that some pieces of software implement more or less identically.

And the difference between that and X is?

> The "Wayland is ready" communication we have heard for the last 15 years or so that was just plain wrong for the vast majority of that time has not helped the project gain trust.

And again, isn't that just Chinese Whispers? That hits EVERY major re-write? Like the KDE4 debacle? GTK3? Plasma? GTK4? Separate the devs from the fanbois, and you'll find the real devs probably hate the fanbois because they're promising the world and expecting other people to deliver.

Sorry mate, but that's life!

Cheers,
Wol

So much for "the universal operating system"?

Posted Nov 3, 2025 8:34 UTC (Mon) by jengelh (subscriber, #33263) [Link] (4 responses)

>[Systemd, Wayland, Rust, and, by extension, apt]
>I guess the alternative is to disagree with the direction and fork, but nobody's upset enough to do that, either.

I guess that is the realization that systemd was/is truly useful. And that, while they are all similarly-aged (systemd: 15, wayland: 17, rust: 13), the latter two's proliferation is not nearly as quick, which maybe is an indicator that the pain points of developers don't line up with the pain points of the users.

Meanwhile, this is a chance to get dnf or zypp or something running on Debian.

So much for "the universal operating system"?

Posted Nov 3, 2025 11:45 UTC (Mon) by taladar (subscriber, #68407) [Link] (3 responses)

I would say for a programming language Rust is spreading quite fast. However getting a programming language and its library ecosystem to a usable state naturally takes longer than a single daemon like systemd.

As for Wayland, they are just plain bad at reporting how usable their system is which makes it hard for the people who haven't switched yet like me to trust them on that, especially since even an attempt to switch is a huge effort there. Unlike systemd most people also don't use more than a single desktop system so trying it on another machine is not really an easier option either.

So much for "the universal operating system"?

Posted Nov 3, 2025 12:41 UTC (Mon) by kleptog (subscriber, #1183) [Link] (2 responses)

> a single daemon like systemd.

I'm not sure if this needs to be repeated, but systemd is far more than "a single daemon". There are about a dozen daemons in the base Debian package, and some of the related packages have several more.

Systemd won out primarily because it was so much better than what came before. No need to play with init scripts/cron/anacron; instead there's a single config style that works for everything in a consistent, reliable way. Deploying a program requiring a cronjob running as non-root is all sorts of fun, but in systemd It Just Works (tm).
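As an illustration of the non-root cron replacement: a user-level timer-plus-service pair under `~/.config/systemd/user/` does the job without touching root's crontab (the unit name and script path below are made up):

```ini
# ~/.config/systemd/user/cleanup.service -- hypothetical nightly job
[Unit]
Description=Nightly cleanup, runs as the invoking user (no root needed)

[Service]
Type=oneshot
ExecStart=%h/bin/cleanup.sh

# ~/.config/systemd/user/cleanup.timer
[Unit]
Description=Schedule the nightly cleanup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enabled with `systemctl --user enable --now cleanup.timer`; `Persistent=true` gives the anacron-like catch-up run if the machine was off at the scheduled time.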

So much for "the universal operating system"?

Posted Nov 3, 2025 13:19 UTC (Mon) by anselm (subscriber, #2796) [Link]

Deploying a program requiring a cronjob running as non-root is all sorts of fun, but in systemd It Just Works (tm).

The other thing is that systemd makes it reasonably straightforward to do all sorts of nifty stuff which the traditional setup never bothered with and which used to be such a royal PITA to retro-fit that few if any people ever actually did (and these people, inasmuch as they existed, weren't generally in charge of Linux distributions).

At this point, those people who still insist on not going anywhere near systemd are basically the Amish of the Linux world. It's great to live according to your own principles, and you're perfectly entitled to do so, but you'll have to get used to other people thinking your horse-drawn buggies are quaint as they zoom past in their air-conditioned electric cars.

So much for "the universal operating system"?

Posted Nov 4, 2025 9:00 UTC (Tue) by taladar (subscriber, #68407) [Link]

I know it is more than a single daemon, all I am saying is that you can't really compare the timeline for creating an entire programming language and library ecosystem for it to the point where it is fit for production use with a single, relatively contained project that only requires minor adjustments to most other projects that interact with it.

So much for "the universal operating system"?

Posted Nov 3, 2025 9:05 UTC (Mon) by jkingweb (subscriber, #113039) [Link] (1 responses)

> In this case, it's not like the platforms are getting banned or removed, there's just nobody doing the work to keep them working.

"Nobody" is an exaggeration, even. Wayback and gcc-rs are being worked on to offer upgrade paths, and there are even still regular releases of sysvinit with genuine improvements, if that's your thing.

The old stuff is definitely not as well supported as it once was, and sometimes that means unfortunate developments like this, but there are still people who both care and put in the effort.

So much for "the universal operating system"?

Posted Nov 3, 2025 11:46 UTC (Mon) by taladar (subscriber, #68407) [Link]

"Nobody" in this context means that the amount of work necessary far outweighs the amount of people willing to do the work, not literally nobody.

So much for "the universal operating system"?

Posted Nov 2, 2025 22:58 UTC (Sun) by Vorpal (guest, #136011) [Link] (6 responses)

Most of those architectures that have been dropped or are on the chopping block were still active (or at least only recently retired) back when support was added. It made sense then. Less so now.

If you look back then, they weren't trying to run Linux on a vacuum tube computer. That would be the equivalent in age of the system...

They weren't even trying to run Linux on a C64, which was far newer and more capable. In the early 90s that would have been around 10 years old. And you can certainly run Linux on a 10 year old computer today. No problem.

So we actually have far better hardware support for old computers these days.

So much for "the universal operating system"?

Posted Nov 3, 2025 1:10 UTC (Mon) by pizza (subscriber, #46) [Link] (5 responses)

> Most of those architectures that have been dropped or are on the chopping block were still active (or at least only recently retired) back when support was added. It made sense then. Less so now.

These folks of yesteryear were trying to run Free Software on the common-for-the-era hardware [and perhaps OSes] they already owned or otherwise had access to. Often it was a generation or two behind the curve, which made it cheap to acquire if not effectively free. (for example my first foray into Linux was using a pre-VLB 486-33 motherboard + RAM that I intercepted on its way to a dumpster, using a hard drive with failing bearings that was so old that it lacked support for DMA..)

> So we actually have far better hardware support for old computers these days.

Absolutely. Though I'd argue that's due to the so-called Wintel monopoly that resulted in an extremely stable base platform. (eg BIOS, then BIOS+ACPI, then UEFI+ACPI, all built on top of a series of CPUs that to this day start up in a mode+instruction set that's been around since 1978)

So much for "the universal operating system"?

Posted Nov 3, 2025 16:48 UTC (Mon) by rgmoore (✭ supporter ✭, #75) [Link] (3 responses)

> Though I'd argue that's due to the so-called Wintel monopoly that resulted in an extremely stable base platform.

While it's true that Wintel has been at the center of the current stability, there were several platforms that came out at the same time that could have played a similar role had things turned out differently. The most obvious is the m68k series whose dropped support people are upset about; if Motorola had continued to develop it instead of moving to PowerPC, maybe it would still be a serious player.

So much for "the universal operating system"?

Posted Nov 3, 2025 17:15 UTC (Mon) by farnz (subscriber, #17727) [Link] (2 responses)

I suspect that, had things played out completely differently, PReP and its descendant CHRP would have filled the "stable base platform" role; IBM and Motorola both had a huge incentive to make it easy for people to buy their chips, and Apple, which was dabbling in the Macintosh clone market at the time, would have had an incentive to push people towards a stable base platform, too.

So much for "the universal operating system"?

Posted Nov 5, 2025 9:41 UTC (Wed) by geert (subscriber, #98403) [Link] (1 responses)

Then Steve Jobs returned to Apple, and killed the Mac clone program.
Other manufacturers who were ready to sell, or were already selling, CHRP machines didn't believe there was a future for their machines running Linux, so CHRP died (outside the IBM AIX universe).

Alternate history needed for CHRP to succeed

Posted Nov 5, 2025 11:06 UTC (Wed) by farnz (subscriber, #17727) [Link]

There's three things that would have had to be different for CHRP to succeed (hence me saying "had things played out completely differently"):
  1. All the OSes people wanted to run at the time would have to have been cheaper on CHRP than on other platforms like x86 and SPARC; Windows NT (both Workstation and Server), Netware, OS/2, Solaris etc would all have to have been cheaper to run on CHRP than on any other hardware, not just AIX and Mac OS.
  2. None of the OS vendors could have abandoned selling their OSes for arbitrary CHRP hardware even as CHRP hardware became cheaper than their "normal" hardware; Apple and Sun would have had to transition from being integrated hardware and software vendors to just selling software.
  3. The cheap x86 clone makers would have had to offer CHRP hardware as cheaply as x86, forcing AMD and Intel to switch to making PowerPC chips in order to survive.

Without all three of those, there's a good chance that x86 would simply be cheaper over time.

So much for "the universal operating system"?

Posted Nov 5, 2025 19:36 UTC (Wed) by jmalcolm (subscriber, #8876) [Link]

> the so-called Wintel monopoly that resulted in an extremely stable base platform

What is amazing is that the same platform has been commercially dominant for so long. Software trying to target the absolute latest desktops compiles down to instructions that have not changed materially in almost 20 years. A Linux distro meant to be installed on a 2025 desktop computer ships with binaries that run just as natively on my 2008 iMac.

> BIOS, then BIOS+ACPI, then UEFI+ACPI, all built on top of a series of CPUs that to this day start up in a mode+instruction set that's been around since 1978

This is not why modern software runs on old hardware, though. All of those things are more about why ancient software continues to run on modern hardware. You can still run DOS 3.3 on a desktop purchased recently. You need a BIOS for that. And of course you need real mode, including that instruction set that has been around since 1978. But I do not need any of that to run a modern Linux distro on that 2008 iMac. No BIOS or legacy x86 instructions are required for that. I can run my modern Linux distro on my 2008 iMac because even back then it had an x86-64 CPU in it, and that architecture has barely changed.

The reason it has not changed is not backwards compatibility though. It is that the ISA already did everything almost all modern software needs. Our computers are much faster but they do not really do anything that our old ones did not. Sure there are the AVX instructions. But very little software needs those. They are typically optional and detected at run time.

> my first foray into Linux was using a pre-VLB 486-33 motherboard

That hardware is no longer supported by the modern mainline Linux kernel. It has a BIOS. It has the instruction set that goes back to 1978. What it does not have is the x86-64 instruction set (or hardware features). And modern software requires those features.

The most obvious problem is RAM. That 486-33 could only "theoretically" support up to 4 GB of RAM. But in practice the most the motherboard was probably capable of would be 64 MB or even less. But there are many other features and instructions that modern software requires that that 486 simply lacks.

And Linux never ran on anything much older than that. The 486DX33 was released in 1990. The very first 386 computer (the first 32-bit x86) came out only 5 years earlier, in 1985. And the 386 was a simply massive evolution over the first PC that appeared just 5 years before that.

It is pretty amazing really. The 386 was released within 5 years of the first 8088-based PC. The difference between those two worlds was titanic, and the software showed it. We went from CP/M and DOS to Windows 95, Windows NT, and Linux. It then took 18 years, 1985 to 2003, for AMD to release amd64 (x86-64). The change in software that resulted was a lot less dramatic. Hardware-supported virtualization would be the biggest thing, I guess. And now it has been over 20 years of x86-64, and not only has there been little change, but there does not seem to be much coming on the horizon. In fact, while x86 swept away essentially all competing architectures, what we are seeing now is 64-bit competition with the same feature set threatening to take x86-64 market share away (ARM and RISC-V).

One of the great benefits of Open Source

Posted Nov 3, 2025 6:45 UTC (Mon) by jmalcolm (subscriber, #8876) [Link] (101 responses)

> It's important for the project as whole to be able to move forward and rely on modern tools and technologies
> and not be held back by trying to shoehorn modern software on retro computing devices.

Being able to run modern software on old hardware is one of the great benefits of Open Source in my mind.

I regularly use hardware dating back to 2008 or so and I do not mean that I use it for "retro" reasons. This hardware is still useful precisely because I can run up-to-the-minute modern software. What the hardware can do is often a product of what the software can do. If I can run modern software, older hardware can be far more useful and capable today than it was when it was new. At least, that is my view.

Older hardware often has better keyboards and nicer screens (to my eyes). If I have enough RAM, I find CPU does not matter that much for many tasks. I would still prefer 32 bit for many things if it were not for the RAM limit that would impose.

A 2008 Intel based Linux machine can run Docker. And it can run Claude in a browser. It can run Metasploit, or Python, or Terraform. It can take a video call. I can use it to post this comment. I may have to set my video to transcode into AV1 overnight...but it can do it.

You can build a pretty decent home lab with nothing more than Proxmox and a 2013 Mac Pro.

What I love about Open Source is that old hardware does not have to be retro at all but can instead remain perfectly useful.

So, it is sad to see hardware become unsupported. But I do not want any of that modern software being "held back" either. I know the day will come. But Open Source generally pushes that day back a few years. Perhaps Rust on GCC will someday enable the other Debian "ports" to compile the latest version of APT and keep trucking.

One of the great benefits of Open Source

Posted Nov 3, 2025 8:18 UTC (Mon) by anselm (subscriber, #2796) [Link] (99 responses)

> A 2008 Intel based Linux machine can run Docker. And it can run Claude in a browser. It can run Metasploit, or Python, or Terraform. It can take a video call. I can use it to post this comment. I may have to set my video to transcode into AV1 overnight...but it can do it.

Sure, but … A 2008 Intel-based Linux machine probably runs amd64 code and is therefore not that far removed from current hardware (modulo speed and miscellaneous non-essential instruction set extensions). There's a difference between that and an Atari ST or a DEC Alpha, which are platforms which are no longer in general use and require a completely different setup if you're a Linux distributor. 2008 Intel-based Linux machines aren't exactly retro-computing.

One of the great benefits of Open Source

Posted Nov 3, 2025 11:36 UTC (Mon) by pbonzini (subscriber, #60935) [Link] (2 responses)

Plus the really insane phase of Moore's law/Denmark scaling was around the Pentium to Pentium 4 period (1995 to 2002 say) when it felt like computers based on x86 processors became obsolete in a year or two. Alpha is from around that era, but m68k is before that. The 68060 is microarchitecture-wise comparable with the Pentium, and with worse clock speeds at that.

One of the great benefits of Open Source

Posted Nov 5, 2025 9:45 UTC (Wed) by geert (subscriber, #98403) [Link]

New computers used to become twice as fast and large (RAM/storage) in 18 months. After 4 or 5 years, they were obsolete.
In sharp contrast, my current desktop has only 50% more RAM than the 7-year old machine it replaced, which is my absolute low record of RAM size increase.

Dennard scaling

Posted Nov 14, 2025 0:44 UTC (Fri) by jrincayc (guest, #29129) [Link]

Dennard (not Denmark) scaling was awesome while it lasted from ~1970 to ~2006, since each individual transistor got both faster and used less power when the size decreased. Now transistors keep getting smaller, but they use roughly the same amount of power and are about the same speed, so individual CPU cores are not improving much. In relation to the article, it means that hardware goes obsolete a lot slower than it used to.

One of the great benefits of Open Source

Posted Nov 3, 2025 13:15 UTC (Mon) by ms-tg (subscriber, #89231) [Link] (95 responses)

Yeah. I suspect I am not alone in finding it amazing/shocking how much energy seems to surround m68k, alpha, etc.

Having used Linux in one form or another since around 1994, I’ve yet to meet someone actually using one of these architectures.

Why does discussion of their support level so dominate discussion of Rust adoption? With this level of passion evident in the discourse, is there no passionate hacker working to get m68k support working “well enough” in rustc to sidestep this obstacle?

I ask because my understanding has been that while these architectures are "supported" in some sense, a lot of software doesn't actually work on them, so the bar isn't terribly high here. Apologies if that's a misunderstanding; please feel welcome to correct me.

One of the great benefits of Open Source

Posted Nov 3, 2025 14:01 UTC (Mon) by Wol (subscriber, #4433) [Link]

> Yeah. I suspect I am not alone in finding it amazing/shocking how much energy seems to surround m68k, alpha, etc.

Along with the 32032, I suspect the reason is that they had clean, well-designed, logical instruction sets.

That said, that was probably true of the Intel 4004 back in the day, and is probably no longer true for the "current" versions of the m68k etc. "It's nice and clean to program for" will attract a following ...

Cheers,
Wol

One of the great benefits of Open Source

Posted Nov 3, 2025 15:06 UTC (Mon) by farnz (subscriber, #17727) [Link] (92 responses)

I see a couple of things in the people I know who use these orphaned architectures:
  1. These were the "dream machines" of their younger days - and it's very cool that the machine you lusted after but could not afford at 23 is now your daily driver. This sort of nostalgia drives things like people still working on RISC OS, but also people wanting the latest Debian to work on their machine.
  2. You're responsible for maintaining long-term supported hardware based on these CPUs, and you don't want to be stuck with ancient software that only you maintain when you can spread the load; if you're responsible for an industrial system that's Internet-connected (so "just freeze the environment and use old software" isn't an option), you really want to share the workload if you at all can, because maintaining everything means justifying a bigger team to your management (or burning yourself out).

And note that there are passionate hackers working to get m68k support working "well enough" - for example, m68k on Linux is a Tier 3 supported target for Rust because Adrian Glaubitz and Ricky Taylor stepped up to do the hard work of making it happen.

Part of the problem is that for every Glaubitz or Taylor saying "it'll happen, but hold off for a while because I'm working on it and it takes time to get a big change like this done and merged upstream", there's 10 noisy people who aren't doing the work, and are saying "hold off until someone else does the work for me".

One of the great benefits of Open Source

Posted Nov 3, 2025 16:28 UTC (Mon) by MortenSickel (subscriber, #3238) [Link]

I was picking up some old machines from work back around 2000, so I have had both a digital alpha box that initially ran WinNT, some sparc stations and an HPUX box running linux in my basement. Fun days, but I do not regret getting rid of those machines.

One of the great benefits of Open Source

Posted Nov 3, 2025 18:12 UTC (Mon) by rgmoore (✭ supporter ✭, #75) [Link] (90 responses)

> Part of the problem is that for every Glaubitz or Taylor saying "it'll happen, but hold off for a while because I'm working on it and it takes time to get a big change like this done and merged upstream", there's 10 noisy people who aren't doing the work, and are saying "hold off until someone else does the work for me".

Maybe more troubling is the group that seems to want to freeze technology the way it was when their favorite system was in its heyday. I get that feeling about a lot of the people who resisted systemd; they really feel like SysV init was good enough for a long time, so we should stick with it forever. There seems to be a similar feeling about Rust. I'm sure the feeling that the systems they care about are getting left behind contributes to the attitude.

One of the great benefits of Open Source

Posted Nov 4, 2025 9:12 UTC (Tue) by taladar (subscriber, #68407) [Link] (88 responses)

I get the impression that a significant part of the Linux kernel developers fall into this camp as far as C is concerned. They consider it the pinnacle of low level programming and never want to consider anything else.

The same goes for their email based workflows.

To me it is just weird to think that any technology from the 1970s, i.e. a mere 20-30 years after the entire field was invented and with vastly different hardware, security environment, regulatory requirements,... than today should be considered perfect and something we can never improve upon.

One of the great benefits of Open Source

Posted Nov 4, 2025 10:25 UTC (Tue) by sthibaul (✭ supporter ✭, #54477) [Link] (87 responses)

> To me it is just weird to think that any technology from the 1970s, i.e. a mere 20-30 years after the entire field was invented and with vastly different hardware, security environment, regulatory requirements,... than today should be considered perfect and something we can never improve upon.

mail and C nowadays have precisely been vastly improved upon over what they were in the 70s.

One of the great benefits of Open Source

Posted Nov 5, 2025 8:40 UTC (Wed) by taladar (subscriber, #68407) [Link] (86 responses)

And systems that weren't hampered by backwards compatibility with things that frankly haven't mattered in decades, and that weren't designed around 1970s assumptions, have improved by orders of magnitude more.

Just look at mail: we still don't have a reliable system, deployed everywhere, that works for all the use cases of determining who actually sent a mail. DKIM, SPF, DMARC, ARC,... are all just patchwork on top of a bad starting point, and even with them, use cases like forwarding and mailing lists don't work properly all the time. Not to mention that 90% of mail servers out there don't implement all of them. There is no end-to-end encryption, there is no authorization of contacts, mail uses half a dozen different encodings that literally nobody else uses, and many implementations break parts of the standard (e.g. the 1000-character line-length limit) even though other parts (e.g. DKIM) depend on them (well, DKIM breaks at lines over 4096, IIRC).

And C is even worse in how much of the single core assumptions are baked directly into the model and how many things are left undefined or implementation defined in the standard because some compiler vendors in the 1980s couldn't agree on something.

One of the great benefits of Open Source

Posted Nov 5, 2025 11:13 UTC (Wed) by pizza (subscriber, #46) [Link]

> And systems that weren't hampered by backwards compatibility with things that frankly haven't mattered in decades and weren't written based on 1970s assumptions have improved orders of magnitude more.

if by "improved" you mean "designed to be a walled garden owned/controlled by a single vendor that never interoperated with anything else" then sure..

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 5, 2025 15:31 UTC (Wed) by dskoll (subscriber, #1630) [Link] (84 responses)

It's impossible to have the benefits of email:

  • Quick communication with someone you've never met, with no prior setup.
  • No centralized authorization needed.
  • Interoperability.
  • Ability for computers, printers, etc. to send automated messages without fuss

without having to live with the downsides. I think it's a reasonable tradeoff; email security tools have gotten to the point where email is still useful and isn't totally overwhelmed with spam.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 5, 2025 16:03 UTC (Wed) by paulj (subscriber, #341) [Link] (76 responses)

It is possible. Spam can be solved by requiring the sender to expend some relatively small resource. E.g., compute (see the olde hash-cash idea), or money (and we have decentralised, programmable money today). The expended resource can be made small enough that the cost is bearable for the 9x-th percentile of users of the system, while making large-scale abuse inordinately expensive.

It's perfectly doable.
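The hashcash idea mentioned above can be sketched briefly. This is a minimal illustrative proof-of-work scheme, not the actual Hashcash stamp format (which is more elaborate); the function names are mine. The sender must burn CPU time finding a nonce whose hash has enough leading zero bits, while the receiver verifies with a single cheap hash:

```python
# Minimal hashcash-style proof-of-work sketch. Minting a stamp is
# expensive (expected ~2**bits hash attempts); verifying it is one hash.
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def mint(message: bytes, bits: int = 16) -> int:
    """Search for a nonce making SHA-256(message + nonce) start with
    at least `bits` zero bits. This is the sender's cost."""
    nonce = 0
    while True:
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= bits:
            return nonce
        nonce += 1

def verify(message: bytes, nonce: int, bits: int = 16) -> bool:
    """Cheap check on the receiver's side: one hash computation."""
    digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= bits

if __name__ == "__main__":
    msg = b"From: alice, To: bob, Subject: hello"
    nonce = mint(msg, bits=16)
    print("nonce:", nonce, "valid:", verify(msg, nonce, bits=16))
```

The asymmetry is the point: a per-message cost that is negligible for an individual sender multiplies into a serious expense for someone sending millions of messages, which is the deterrent being proposed.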

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 5, 2025 17:03 UTC (Wed) by dskoll (subscriber, #1630) [Link] (75 responses)

People who say it's perfectly doable never seem to have read this.

What's in it for me to spend money to send email? More to the point, what's in it for me to pay for emails sent out by my cron jobs or monitoring systems?

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 5, 2025 17:06 UTC (Wed) by dskoll (subscriber, #1630) [Link] (1 responses)

Sorry, another followup. Another thing is you're not thinking like a criminal. A criminal won't be deterred by micro-payments or by having to use compute because they'll just steal those things from innocent victims. We already see compromised devices being massively weaponised in botnets. A botnet has vastly more computing power than you'll ever need to break through proof-of-work anti-spam systems.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 11:29 UTC (Thu) by paulj (subscriber, #341) [Link]

But then the economic value of the resource (the compute power, or the micro-payment currency value) outweighs the economic benefit of sending the spam. The criminal will just use the stolen resource directly - not send spam.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 11:26 UTC (Thu) by paulj (subscriber, #341) [Link] (72 responses)

Essentially everyone pays for email. The vast, vast majority pay for email by selling their eyeballs and their data to big-tech to be served ads (and last number of years, have their data slurped into AI training potentially). A small number pay specific email providers for hosting. A tiny, tiny fraction - effectively 0 people relative to all email users - go to the trouble of running and maintaining their own personal email infrastructure. Even amongst old tech nerds - the kind where a large fraction had their own little server running an SMTP server in the 90s and 00s - the vast vast majority now /pay/ for their email, one way or another.

Further, your comment above seemed (to me), to frame the problem more generally than just SMTP and seemed to refer to communication in general, that you could not have a series of benefits without the downsides (i.e., the various complex hack-on side-protocols to limit spam).

In general, we /do/ have a way. The technical ability is there.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 11:41 UTC (Thu) by farnz (subscriber, #17727) [Link] (62 responses)

The technology may be there, but the reason e-mail is still so widely deployed is social, not technological, and your proposal (like many) doesn't answer a lot of social questions, not least "why should I, as someone who is well-served by the current state of affairs, switch?".

It's worth noting, in that context, that the thing that keeps killing micropayments is the cost of handling disputes; if I've been hacked, and the hacker has spent $10k of my money, it's well worth my while financially disputing all of those payments and getting as many undone as I can (via the courts if there's no dispute mechanism to avoid that), but each of the recipients has (relatively speaking) much less incentive to not just give in and let me get back my $0.001 or whatever that was spent with them. This essential asymmetry has to be addressed somehow - unless you're saying that people who get hacked "deserve" to lose large sums of money, of course.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 12:05 UTC (Thu) by paulj (subscriber, #341) [Link] (61 responses)

Yes, everything else is "social".

The question of theft of money, and what happens if the thief is caught but has already spent much of the money, and whether or not that money can be recovered from those who received it, is also a social one. And it is a question I'm sure existed long before computers, one which already has a body of judicial decisions covering it (no idea what they are). If a thief spends X thousands of money stolen from me with some third party, but that third party acted in good faith and had no reason to believe the money was stolen, may I recover that money from that third party, leaving them out of pocket?

I have no idea what the law says. I assume the answer is context and probably jurisdiction dependent. I doubt the issues are any different for computerised micro-payments (?).

Anyway, social questions, largely.

Note: Some common technological micro-payments are "transparent" - the transactions can easily be traced on a public electronic ledger, and it may therefore be easy for criminal investigators to find that a thief paid some (innocent) retailer X thousands, even if they never identify the thief. However, the CryptoNote paper exists, and I think in the future many (most?) micro-payments will use non-trivially-traceable ledgers. (It already is the case for some sectors that use distributed, online micro-payments).

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 12:20 UTC (Thu) by farnz (subscriber, #17727) [Link] (60 responses)

The law in my jurisdiction has a solution, but it trashes micropayments; the third party is required to return the money to me, and is out of pocket until they can identify the thief. They can recover their full costs (not just the money they returned to me, but all of the costs they incurred that way) from the thief once they've found the thief, but only if the thief actually has legitimate assets you can recover from. There are exceptions for someone acting as my agent (which cover banks, for example), but those exceptions require that the agent is appropriately licensed, and that we've got a pre-existing contractual relationship which agrees that you're acting as my agent.

If you refuse to return the money, then you have yourself got into the realms of criminal activity. And as someone out $10,000, it's worth my while chasing all of it to get as much back as I can - but is it worth your while getting a criminal record over $0.001? If your answer to that is "no, I'd just return it to avoid this outcome", then you have ensured that spammers don't pay (since they use stolen resources), which destroys the economic incentives.

And it's this sort of social problem that you have to solve in order to make an e-mail replacement workable. The technology problems are trivial in comparison.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 14:09 UTC (Thu) by paulj (subscriber, #341) [Link] (59 responses)

That is just law on property. I don't see any special bearing on micro-payments.

Whether you hand me €x of stolen money or you send me €x worth of stolen micro-payments, by the law as you describe it, once I am made aware it was stolen I have to give it back to you. I'm not sure how the amount is that significant either - your argument seems to be that because the sum is minuscule it changes something. But...

If you lost €0.001 worth of micro-payments, are you going to bother tracking down where it went, finding out who is behind whatever address it went to, contacting them, etc. It's not worth your time. Also again:

> I think in the future many (most?) micro-payments will use non-trivially-traceable ledgers. (It already is the case for some sectors that use distributed, online micro-payments).

The micro-payment system can easily be one with a non-transparent ledger and distributed - no central authority that can look behind any curtain. In technical terms, your objections do not hold. A distributed, decentralised, permissionless, communication system can be constructed that puts a sufficiently high cost on spam to deter it, while incurring only trivial costs for nearly all users.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 14:28 UTC (Thu) by farnz (subscriber, #17727) [Link] (58 responses)

The reason it's a problem is that I lost €10,000 in total, as the hacker used my account to send ten million messages - it's worth my while chasing that much up.

But each of the recipients owes me a much smaller amount - how much time and effort are they going to put in to avoid me fraudulently getting a refund of €0.001? If the answer is "none", then spammers just demand refunds, and get them because you're not willing to put in any effort to stop me getting one fraudulently. If the answer is not "none", then you're risking a criminal conviction over what to me is €10,000, and to you is €0.001; how much time are you willing to put into defending yourself here, and why aren't you putting that time in to stopping spam already?

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 14:55 UTC (Thu) by paulj (subscriber, #341) [Link] (57 responses)

Your objection is, at best, a social one.

In technical terms though: How do you know who received your micro-payments? The communication system consists of anonymous nodes, by design, precisely to avoid the problems you are trying to create for it.

  1. The communication system can be designed to consist of relatively anonymous nodes, incentivised to provide service by the micro-payments.
  2. The micro-payment system can be designed to resist traceability.
  3. The distribution of the micro-payments by senders to the nodes of the communication system carrying out the work can be anonymised (e.g., see 2) and/or diffused such that, even if 1 were not true and nodes were not anonymous, it is not possible for you, any general observer, or even the recipient to know where a payment was sent from or to (obviously a recipient can know a payment went to it, but that's all).

This technology exists, there are examples of all the pieces of this system and of some combinations of the pieces, in various applications. Some pieces are very widely used (e.g. an implementation of 2 is the dominant, universal even, form of payment system in some sectors). I don't know if there's a communication system based on all the elements of this model, but it can be.

There is Session messenger (getsession.org). It doesn't use micro-payments as of yet. It may do one day, if use gets big enough and spam / resource-abuse becomes an issue.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 14:58 UTC (Thu) by farnz (subscriber, #17727) [Link] (56 responses)

So what you're saying is that if I'm hacked, the hacker can drain my accounts completely, leaving me destitute? Why would I sign up for this system over the existing e-mail system?

Once again, this isn't about the technology - the technology exists, and can be made to work. It's about the social aspect; you're saying now that if I don't take a lot of care to avoid being hacked, I can lose all my money and have no recourse. That's not exactly a selling point of any system.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 15:28 UTC (Thu) by paulj (subscriber, #341) [Link] (55 responses)

Yes, if your computer is hacked you potentially may have all your accounts drained. This in fact happens regularly, sadly. This has absolutely nothing to do with micro-payments specifically.

Indeed, micro-payment systems tend to have better security features than the common banking system does.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 16:33 UTC (Thu) by farnz (subscriber, #17727) [Link] (54 responses)

Sure, but now you're telling me that, if I want to do e-mail, I have to expose my account details (which can thus be drained) to my e-mail system. That's a new avenue of attack, and given the amount of e-mail I send anyway, is one where I'm much more likely to not notice that One Weird Transaction that drains my accounts (because a transaction on every e-mail send is normal).

And again, as a social matter "the existing thing exposes you to risk, this thing means that it's harder to not expose yourself to that risk" is not a selling point. Unless I can completely remove myself from the existing thing (so no common banking system at all, for any purpose, including things like groceries), you're saying that I should accept more risk to make this thing happen; that is always going to be a hard sell.

Note, too, that the "common banking system" (at least here) is set up such that all transactions can be reversed, because I can, if the bank doesn't handle it internally, get a court order forcing the transaction to be reversed. That's my big security feature - none of my outgoing transactions are irreversible, if I'm willing to put the legwork in to have them reversed. I've not seen a micro-payment system with a similar guarantee of reversibility.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 17:15 UTC (Thu) by paulj (subscriber, #341) [Link] (53 responses)

You can design the system so that payment to the communication system for your use is separate from sending communications through that system. I.e., you use your micro-payment software to send a payment to the communication system. You use your communication system client software to send messages. Your client can have a key that identifies it as associated with whatever balance, but without any control of that balance (the communication system controls it, at that point).

There is little difference here - from my perspective - whether I use a credit card to make a payment from my normal bank account to my email provider, or whether I use a distributed, electronic payment system to make a payment to the same entity (there are numerous email hosting providers who accept both credit/debit cards and other non-fiat-money payment systems).

The design you're floating - with your email client somehow having full control over any balances (never mind significant ones) - seems somewhat insane, and so of course it's not how these things are designed, whether with standard centralised payment systems or with more distributed, decentralised ones. ;)

The decentralised payment version can allow for things like recoverable balances. E.g., if I've made x amount available to top up my balance with the communication system, that could be done by paying to a 1-of-2 multisig, so that either I or the communication system can pay out from the balance. Which means I can take the balance back into my full control. With standard payments, if a communications provider goes bust, I will not get my balance back from the company; I'll have to wait for a receiver to come in, take control, and disburse my funds back. With the distributed system, I can take my balance back, plus disbursement by the distributed system to some node can itself be protected by a wider consensus that the said node actually did some work to (help) send the message(s).
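The 1-of-2 arrangement described above can be sketched as a toy model (plain Python, not a real multisig script; all names and key strings are illustrative, and real systems would verify cryptographic signatures rather than compare key identifiers):

```python
# Toy model of a 1-of-2 recoverable balance: either the user or the
# communication system can authorize a payout, so the user can always
# reclaim an unspent top-up if the provider disappears.

class OneOfTwoBalance:
    def __init__(self, user_key, provider_key, amount):
        self.keys = {user_key, provider_key}  # either key may spend
        self.amount = amount

    def pay_out(self, signing_key, amount):
        """Any one of the two keys may spend from the balance."""
        if signing_key not in self.keys:
            raise PermissionError("key not authorized for this balance")
        if amount > self.amount:
            raise ValueError("insufficient balance")
        self.amount -= amount
        return amount

# The user tops up 100 units; the provider draws down for messages sent,
# and the user can sweep the remainder back at any time.
bal = OneOfTwoBalance("user-key", "provider-key", 100)
bal.pay_out("provider-key", 30)                  # provider charges for messages
recovered = bal.pay_out("user-key", bal.amount)  # user reclaims the rest
```

The point of the 1-of-2 (rather than 2-of-2) policy is exactly the recoverability argued for above: neither party needs the other's cooperation to move the funds.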

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 17:19 UTC (Thu) by farnz (subscriber, #17727) [Link] (48 responses)

You're making it hard to send e-mail, then.

Today, I compose the e-mail, I hit send, it goes. Job done.

In your system, you're suggesting that I compose the e-mail, I hit send, I get a prompt to go into my payments system to approve a top-up to the e-mail system, I have to go across to that, check that the top-up is reasonable, and permit it, and then it goes.

And I cannot square your talk about being able to recover the payment made for an e-mail that was sent and received by the recipient (but declared as spam) with the idea that this payment is a deterrent to spamming. Either it's irreversible (in which case, that's a whole new set of risks that isn't present in the current system), or I can have it reversed if I didn't send the e-mail personally, and e-mail is effectively free to criminals (since they hack systems, and their victims reverse the payments).

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 10:30 UTC (Fri) by paulj (subscriber, #341) [Link] (47 responses)

No, that's not what I'm proposing.

I think you know fine well that paying for an online service does not imply that you then must manually take actions to pay at each and every use. You could pay in batches in advance - one very common model. Even LWN uses that! You need not even pay yourself. If ads make money for big tech, they'll continue to let you just pay with your eyeballs and data. Etc.

How the system itself manages distribution of payments does not of itself have to govern anything about what users do.

Anyway... this is a long side track away from topic of the story.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 11:03 UTC (Fri) by farnz (subscriber, #17727) [Link] (46 responses)

If I pay in batches in advance, and I happen to run out of credit just as I send e-mail, how do I top up without paying more? If I can send on credit and top up later, why wouldn't a spammer send on credit and "forget" to top up? Similar with big tech; if they're letting me pay with eyeballs and data, why wouldn't a spammer create many new accounts that they can use to send spam (as they already do today)?

You're looking only at the happy path, and saying "as long as this all works as intended, there's no problems". I'm looking at the edge cases, like "running out of credit just as you send an important e-mail", and asking how you solve that.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 13:21 UTC (Fri) by paulj (subscriber, #341) [Link] (45 responses)

Again: If you don't want to care about having to top up some balance, just let advertisers and data-miners do the paying for you. If you're happy with that, go for it. Otherwise, you need to pay - various models are possible, from pay at send to batch-pay in advance.

Knowing when you need to top up some balance for some service is just a general life thing, and has nothing specifically to do with online micro-payments. I irregularly use the train, and more than once I've been at my local station furiously typing CVV codes into my phone app to try to get my "Leap" (Mifare, I think) card topped up, so I can tap in at the gate as the train is approaching....

You're just trolling at this stage I feel. ;)

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 13:47 UTC (Fri) by farnz (subscriber, #17727) [Link] (38 responses)

So, we come back round to why would anyone switch to a pay-to-mail system?

Big tech are happy enough with the current SMTP + DMARC setup; it works for their needs, and they have no need to change it. What makes it worth their while contributing some of their profits to a third party?

And you're continuing to miss the point - you've added an extra way for me to lose my money, for no gain to me over the current system (SMTP + DMARC with a decent spam filter is very low on junk for me already, and the pain of dealing with disputes over money paid for delivery of mail to me would outweigh any reasonable payment).

I also note, now that I recall the previous conversation, that you never responded to this comment thread from over 2 years ago - did you actually receive the money I sent, or did it go missing? If it went missing, how do we dispute the transaction and ensure that it gets to either you or back to me?

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 14:43 UTC (Fri) by paulj (subscriber, #341) [Link] (5 responses)

I have no idea why you would switch. That's not a technical issue, but a social one - around network effects, etc.

You currently already can lose your money if your computer is hacked, if you pay for email. If you're happy to sell your eyeballs and data, then in any new (not email!) messaging system that used some online, distributed payment system to make spam uneconomic, you could use clients that sold your eyeballs and data to big tech, and *you* would need no additional payment systems on your computer/client. Even if you chose to pay, it still need /not/ be an extra risk, because you may well be using this payment system for numerous things already.

As for your experience with the current email system and lack of spam, that's because of a layer of crappy additional side-protocols which *still do not substantially stop spam*, PLUS a filtering system to try to separate out the deluge of spam that _still gets through_. All of which you _ALREADY PAY FOR_ - one way or another.

It's a _shit_ system. It _does not work_ - not even the big-tech companies manage to reliably stop spam by any means, and also do not manage to reliably separate the spam from the signal. There are regular false-positives, and many false-negatives in my Big-tech administered Inbox.

As for the 2yo comment: I never saw that somehow, till now. Or I saw it, meant to check later, and forgot! I'll try to remember today :)

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 15:36 UTC (Fri) by farnz (subscriber, #17727) [Link] (4 responses)

It works well enough for most use cases - any new system has to offer a good reason to switch.

And we know from SMS (which is charged per-message to the sending companies, even if you buy a bulk lot from a provider like Twilio) that charging isn't enough to reliably stop spam, either; there's ways to get around charging, including outright fraud. From what you've described, you're going to recreate the problems SMS has, which include spam and financial problems, in order to get rid of the problems e-mail has; but then, why would I use the new protocol, and not SMS?

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 16:04 UTC (Fri) by paulj (subscriber, #341) [Link] (3 responses)

> there's ways to get around charging

So.... there often wasn't charging is what you're saying.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 16:08 UTC (Fri) by farnz (subscriber, #17727) [Link] (2 responses)

No; there was charging done, but then fraud and other criminal activity meant that the money didn't actually transfer as intended, or the charges were undone by court order.

The "charges undone by court order" is impossible to avoid without making your payment system in breach of anti money laundering regulations, and therefore illegal to use at scale.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 10, 2025 11:51 UTC (Mon) by paulj (subscriber, #341) [Link] (1 responses)

> The "charges undone by court order" is impossible to avoid without making your payment system in breach of anti money laundering regulations, and therefore illegal to use at scale.

I'm no legal expert, but the existence of (on-chain) irreversible distributed payment systems, and of businesses created around them and/or using them (including very large and some heavily regulated ones), shows your belief here cannot be true. The on-chain transaction cannot be reversed once confirmed, but businesses can always refund - by choice or legal order - some payment.

AFAIK, the likes of the EU are not trying to ban irreversible distributed payment systems.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 10, 2025 12:07 UTC (Mon) by farnz (subscriber, #17727) [Link]

You don't need to change the ledger - it's entirely allowable to have the original transaction in the ledger, and a later transaction that reverses the full effect of that previous transaction.

What is not legal is a setup where the money can neither be retrieved directly by the sender, nor can the recipient be identified for the purposes of having the court order apply to them, too. Otherwise, how do you prove (as required by Russian, Chinese, EU and USA sanctions laws) that you're not sending money to a sanctioned entity directly?
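The "reverse without rewriting the ledger" point above can be illustrated with a minimal sketch of an append-only ledger, where a court-ordered reversal is simply a later compensating entry (names and structure are invented for illustration, not taken from any real system):

```python
# Toy append-only ledger: entries are never removed or modified; a
# "reversal" is recorded as a new transaction in the opposite direction.

ledger = []

def transfer(sender, recipient, amount):
    ledger.append({"from": sender, "to": recipient, "amount": amount})

def reverse(entry_index):
    # Same parties, opposite direction; the original entry stays intact.
    orig = ledger[entry_index]
    transfer(orig["to"], orig["from"], orig["amount"])

def balance(party):
    received = sum(e["amount"] for e in ledger if e["to"] == party)
    sent = sum(e["amount"] for e in ledger if e["from"] == party)
    return received - sent

transfer("alice", "bob", 50)
reverse(0)   # e.g. a court order: undo the payment

assert len(ledger) == 2              # history is preserved, not rewritten
assert balance("alice") == 0
assert balance("bob") == 0
```

The net effect of the payment is undone, yet the ledger's immutability is untouched, which is the distinction being drawn in the comment above.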

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 14:56 UTC (Fri) by pizza (subscriber, #46) [Link] (31 responses)

> And you're continuing to miss the point - you've added an extra way for me to lose my money, for no gain to me over the current system (SMTP + DMARC with a decent spam filter is very low on junk for me already, and the pain of dealing with disputes over money paid for delivery of mail to me would outweigh any reasonable payment).

There's one more wrinkle, and it's a doozy. When money is involved (or you are otherwise exchanging some measure of "value" for a service), you're veering into the territory of [potentially heavily-]regulated commercial activities in most jurisdictions, and now you have to care about recordkeeping, paying taxes, etc. Not to mention any gateway to "real" payment systems will have its own voluminous technical+contractual requirements, etc etc.

> So, we come back round to why would anyone switch to a pay-to-mail system?

Especially to one that presupposes the existence of a spherical cow (==functioning micropayment system that's universally deployed.. with bidirectional transfers into arbitrary national currencies)

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 15:58 UTC (Fri) by paulj (subscriber, #341) [Link] (30 responses)

I don't presuppose anything about social, regulatory or other non-technical issues.

I'm just saying that, from a technical perspective, the original assertion - that a set of constraints and desires for messaging systems was impossible to meet without the specified downsides of current email - does not hold.

Technically, we have distributed protocols that can achieve the desired goals. Most of the issues raised in objection are simply moot, as they apply to wider structures that have been in use in society since long before computers. The valid objections are generally social, e.g. those in your reply.

On your specific points: the vast, vast majority of email providers are companies already taking payment from /someone/ (whether the email sender, or the advertisers who want to place ads before the email senders). All the regulatory burdens are already there in that particular implementation of a messaging system, for essentially all entities involved in operating the underlying messaging system (the number of entities that are not corporations is pretty much 0 by comparison to the rest). Further, being involved in some minuscule way in the operation of a messaging system that uses micro-payments need not have regulatory or tax implications - most regimes have thresholds to exclude trivial cases from tax or regulatory burdens.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 18:01 UTC (Fri) by anselm (subscriber, #2796) [Link] (29 responses)

vast vast majority of email providers are companies and already taking payment from /someone/ (whether the email sender, or the advertisers who want to place ads before the email senders).

That may be the case, but today I am the email provider for myself and a few friends, and I would like to keep doing this. If, to continue to be my own email provider, I would have to connect to some payment system and deal with all the legal red tape required to be a commercial entity (and at least around here, “doing something on a sustained basis that involves other people and money” is the basic definition of “being a commercial entity”), then this would no longer be a viable proposition, and that would really suck. It may turn out that in the end I might not actually be liable to pay taxes, etc., but the red tape would still be there in order to get to that point.

Anyway, never mind micropayments, which are way too much of a hassle to be worthwhile. If we really want to fix email, the first thing to do is to stop sending email around on the off-chance.

Instead, the email is stored at the sender's end and the receiver is notified that there is some email to pick up for them. The receiver can then decide whether they want it (based on whether the sender is on a list of approved senders, or the notification has the correct signature, or the hash for the actual mail doesn't show up in a spam database, or whatever) and pick it up from the sender's server if that is the case.

This approach makes it harder for spammers to fake the sender's address (they could still try to send fake notifications but there wouldn't be anything on the sender's server to pick up; also the system would presumably validate that a notification for a message from sender@example.com actually comes from a server which is allowed to send notifications for example.com, à la SPF) and doesn't require receivers to download and store messages they're going to discard later because they're spam. Backscatter-type spam is eliminated completely because there is no need for “bounces” in the first place. Just a thought.
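The notification-then-pickup flow described above can be sketched as a toy, in-memory model (Python; class names, the dict-based "notification", and the SPF-like domain check are all invented for illustration, not a real protocol):

```python
# Toy sketch of notification-then-pickup mail: the sender's server keeps
# the message; the receiver fetches it only after checking the
# notification against its own policy.
import hashlib

class SenderServer:
    def __init__(self, domain):
        self.domain = domain
        self.outbox = {}   # message hash -> message body

    def post(self, body):
        digest = hashlib.sha256(body.encode()).hexdigest()
        self.outbox[digest] = body
        # Only this small notification is pushed to the receiver.
        return {"from_domain": self.domain, "hash": digest}

    def fetch(self, digest):
        return self.outbox.get(digest)

class Receiver:
    def __init__(self, approved_domains, spam_hashes):
        self.approved = approved_domains
        self.spam_hashes = spam_hashes

    def handle(self, notification, server):
        # SPF-like check: is this server allowed to notify for the domain?
        if notification["from_domain"] not in self.approved:
            return None          # rejected; message is never downloaded
        if notification["hash"] in self.spam_hashes:
            return None          # known spam; message is never downloaded
        return server.fetch(notification["hash"])

server = SenderServer("example.com")
note = server.post("hello")
rx = Receiver(approved_domains={"example.com"}, spam_hashes=set())
message = rx.handle(note, server)
```

Note that rejection happens before any message body crosses the wire, which is the storage/bandwidth win claimed above, and a fake notification points at nothing fetchable on the legitimate server.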

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 10, 2025 11:43 UTC (Mon) by paulj (subscriber, #341) [Link] (28 responses)

> That may be the case, but today I am the email provider for myself and a few friends, and I would like to keep doing this. If, to continue to be my own email provider, I would have to connect to some payment system and deal with all the legal red tape required to be a commercial entity (

Again, all jurisdictions I am familiar with have thresholds and exemptions for non-commercial and/or no- or low-revenue businesses. E.g., VAT: there are thresholds, and you have to have a fairly non-trivial business before you are required to register for VAT. If you are not making money, there are no tax liabilities, and there are unlikely even to be reporting obligations (unless, again, you have some large revenue on which you're making no money). I am unsure what other regulations you think might apply to running a small communication system for friends, for which you might have them contribute money in some online-payment system - even reporting obligations for financial transactions have thresholds set at at least €1,000 across the EU.

Alternatively, just go anonymous. A system secured against spam by money, or other proof of resources, can have anonymous nodes.

So... it's just a strawman. There are no regulations nor taxes that would apply to some trivial-scale "friends and family" next-gen-email-replacement system.

There are open-source projects in this space already, and you can run their servers if you wish - e.g. Session (partly a Signal fork, but replacing the messaging fabric). The notion that you need to register a company and pay taxes to run a Session server and have it participate in the swarms is just flat-out false.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 10, 2025 13:32 UTC (Mon) by pizza (subscriber, #46) [Link] (23 responses)

> So... it's just a strawman. There are no regulations nor taxes that would apply to some trivial-scale "friends and family" next-gen-email-replacement system.

Methinks the "strawman" here is one of your own construction.

Remember, you're not interacting with just "friends and family"; you're interacting with everyone said friends+family communicate with, and that will necessarily include complete strangers and businesses of all sizes. (If it were just a closed friends+family system, you'd have an alternate trust system and could eschew this automagic micropayment system entirely!)

Meanwhile, if you interact with real-world currencies, you will run into voluminous regulations and the fifth circle of hell that is payment processing systems. This goes well beyond the scope of taxation; look up the UCC sometime.

Sending e-mail via a possibly sanctioned entity

Posted Nov 10, 2025 14:11 UTC (Mon) by farnz (subscriber, #17727) [Link] (22 responses)

And note that you can't control who your friends and family choose as mail providers. For example, my home mail server (used by 3 people - me, my spouse, my mother) regularly sends mail to a server belonging to a sanctioned entity. This is fine, legally speaking, because no money is involved; I'm forwarding data from my mother's mail client to her friend's mail server, and the mere act of forwarding data is not sanctioned.

The moment money gets involved, though, I have to ensure that I don't attempt to pay for mail delivery to this person's mail server, because if I do so, I will be in breach of sanctions law. And the easiest way to handle this problem is to pay someone who already handles sanctions law as a matter of course - stop running my own server, and just pay for Google Workspace or similar.

Sending e-mail via a possibly sanctioned entity

Posted Nov 10, 2025 15:02 UTC (Mon) by paulj (subscriber, #341) [Link] (21 responses)

> The moment money gets involved, though, I have to ensure that I don't attempt to pay for mail delivery to this person's mail server, because if I do so, I will be in breach of sanctions law.

As stated before, it is technically possible to have a distributed system that includes or relies on a distributed ledger payment system where no one can determine from the ledger, with any useful certainty, how much was sent by whom to whom. Only the sender knows how much was sent to which sub-address. The recipient knows how much was received to which sub-address, but not the address from which it was sent. I.e., a CryptoNote protocol.

Such non-transparent payment systems will ultimately dominate the space for online, decentralised, distributed payment systems (and already do!), precisely because the older technology of transparent public-ledger systems becomes mired in unworkable regulations. Eventually, the regulatory system will lose here and have to concede - just as in the previous regulatory war on maths in the 90s.
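The CryptoNote-style sub-address property described above - only the sender and recipient can link a ledger entry to the recipient - can be illustrated with a deliberately simplified sketch. Real CryptoNote derives one-time addresses via elliptic-curve Diffie-Hellman; this toy fakes the shared secret with a plain hash, purely to show the shape of the idea (all names are invented):

```python
# Toy illustration: each payment goes to a one-time address that an
# outside observer cannot link to the recipient or to other payments.
import hashlib
import os

def one_time_address(recipient_view_key, tx_nonce):
    # The sender computes this; only the resulting address and the nonce
    # appear on the ledger, never the view key itself.
    return hashlib.sha256(recipient_view_key + tx_nonce).hexdigest()

recipient_view_key = os.urandom(32)   # known only to sender and recipient
nonce = os.urandom(16)                # published alongside the payment

addr = one_time_address(recipient_view_key, nonce)

# The recipient scans the ledger and recognizes its own payment by
# recomputing the address with its view key.
assert one_time_address(recipient_view_key, nonce) == addr

# Two payments to the same recipient use fresh nonces and so look
# completely unrelated to anyone without the view key.
addr2 = one_time_address(recipient_view_key, os.urandom(16))
assert addr != addr2
```

An observer holding only the nonce and the address learns nothing usable, since recomputing the address requires the view key; that is the unlinkability property being claimed above, albeit here under toy assumptions rather than real ECC math.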

Sending e-mail via a possibly sanctioned entity

Posted Nov 10, 2025 15:41 UTC (Mon) by paulj (subscriber, #341) [Link] (19 responses)

Oh, and for clarity, as stated before, this means the wider distributed messaging system can be made so that the sending node that sends a payment for a message or set of messages does not know which other set of nodes ultimately are reimbursed for participating in the communication of those messages.

Ergo, users are not sending any money to any specific node. Ergo, users in regime X (where regime X dislikes another regime Y enough that it has punitive sanctions against anyone within its reach who might do such terrible things as send messages within a distributed system that happens to have some participant nodes located in, or run by people in, regime Y) cannot be said to have interacted in any way with regime Y.

The shocking rise of illiberalism, even neo-fascism, *across the world* will simply accelerate the adoption of privacy-protecting distributed messaging and payment systems. (Session - getsession.org - possibly being the best of what is workable, at this time, in the messaging system space).

Sending e-mail via a possibly sanctioned entity

Posted Nov 10, 2025 21:07 UTC (Mon) by pizza (subscriber, #46) [Link] (18 responses)

> Ergo, users are not sending any money to any specific node.

LOLwut?

Party A wants to send email to party B. To do so a token of some "value" must be transferred that can be converted to/from "money" at either end.

No matter how much technical handwavery you layer in the middle, there's no escaping that fundamental fact, nor the fact that national governments have _very_ strong opinions (ie "laws" backed up by literal armies) on the subject of "transferring tokens of value".

It doesn't matter what value I transfer to a sanctioned entity, or how I do it. Legally it only matters that I did so (or directed someone else to do so on my behalf).

> The shocking rise of illiberalism, even neo-fascism, *across the world* will simply accelerate the adoption of privacy-protecting distributed messaging and payment systems.

I'd agree with you on the messaging front, but *payment systems* are another matter entirely. The fundamental problem with distributed payment systems is how said system converts into "real" currency on either end.

Sending e-mail via a possibly sanctioned entity

Posted Nov 11, 2025 10:37 UTC (Tue) by paulj (subscriber, #341) [Link] (17 responses)

We're agreed there is rising illiberalism across the world, notably in previously liberal, western democracies. I would view the ever restrictive laws on anonymity, the ever greater control our states have as a problem - given how this can be abused. The rising illiberality makes it a pressing problem.

To fight illiberalism requires the ability to associate. To fight illiberalism in a state that is willing to use the tools of control against opponents (as has now happened in a number of western "liberal democracies", against dissident motivations across the spectrum - it's not a question of left or right) requires the ability to associate anonymously (at least, anonymous to outsiders). Effective association requires some anonymity in communication, and in acquiring and distributing resources.

To object to such tools because "Lolwut? govs wont like it bruv" is simply not an argument worth considering.

Sending e-mail via a possibly sanctioned entity

Posted Nov 11, 2025 12:38 UTC (Tue) by malmedal (subscriber, #56172) [Link] (16 responses)

> requires the ability to associate anonymously

No, anonymity is helpful if you want to subvert a democracy. Crypto is helpful for paying agitators in a deniable way (e.g., where does Tommy Robinson get money for his luxury vacations?).

If you want to overthrow a dictatorship (what's the point of using euphemisms like illiberal?), what you need is a mass movement that is too big for the state to handle.

The greater control a state has today because of surveillance is down to the current state of technology; you are not changing that by getting democracies to restrain themselves with laws. A dictator will just ignore those laws, making them completely pointless.

Sending e-mail via a possibly sanctioned entity

Posted Nov 11, 2025 14:05 UTC (Tue) by daroc (editor, #160859) [Link]

Okay -- The micropayment stuff was interesting, if not exactly on topic, but this has strayed far from the original topic. Let's stop here, please.

(Remember Debian? This is a song about Debian ...)

Sending e-mail via a possibly sanctioned entity

Posted Nov 11, 2025 14:08 UTC (Tue) by paulj (subscriber, #341) [Link] (3 responses)

Not all just movements are popular initially. Some oppression can be restricted to small groups - and hence opposition will not easily or quickly rally mass support. One man's freedom fighter is another man's terrorist. A terrorist today is a brave freedom fighter tomorrow (a wanted terrorist was just in the US White House).

It is interesting to see how my generation of techies - who, when they were young, would nearly all have been involved in, or at least strongly supported, the cypherpunk movement, and been against the government in the crypto-wars of the 90s - have often become more conservative, at least in terms of supporting state control. People who once would have invoked May's "four horsemen of the Internet" (popularised by Schneier) as a derisory label now invoke those horsemen in support of the ever-broadening tech-panopticon surveillance state.

Sending e-mail via a possibly sanctioned entity

Posted Nov 11, 2025 15:34 UTC (Tue) by malmedal (subscriber, #56172) [Link] (1 responses)

You don't seem to understand my point: crypto-currencies are only a useful tool against an opponent who is unwilling to use the standard dictatorship playbook, such as torture, arresting family members, etc.

It's possible to write a fictional scenario where these really are the bad guys, but currently, on planet Earth, none of the far too few countries that actually respect the rule of law deserve to be overthrown.

Your specific example refers to Syria; the old regime would have collapsed years earlier if it hadn't been propped up by the drug trade and associated money laundering, so crypto was very much on the wrong side there.

Sending e-mail via a possibly sanctioned entity

Posted Nov 11, 2025 17:59 UTC (Tue) by paulj (subscriber, #341) [Link]

For clarity, and without intending to further the discussion. My reference to Syria was solely to illustrate the "One man's freedom fighter...." concept. My references to rising illiberalism were meant largely to refer to western democracies, which (to me) are steadily inching down ever more totalitarian paths - on both sides of the political spectrum (as and when they gain power). States already highly illiberal are of course also a concern.

Sending e-mail via a possibly sanctioned entity

Posted Jan 24, 2026 2:16 UTC (Sat) by paulj (subscriber, #341) [Link]

Oh, and it is absolutely not hyperbolic to say they want to build a panopticon, because they literally say it out loud. The Home Secretary of the UK, in a recent interview with The National:

“When I was in justice, my ultimate vision for that part of the criminal justice system was to achieve, by means of AI and technology, what Jeremy Bentham tried to do with his Panopticon. That is that the eyes of the state can be on you at all times.”

https://www.thenational.scot/news/25780001.shabana-mahmoo...

The desire of our securocrat states is very clear and open.

Sending e-mail via a possibly sanctioned entity

Posted Nov 11, 2025 15:51 UTC (Tue) by NAR (subscriber, #1313) [Link] (10 responses)

If you want to overthrow a dictatorship (what's the point of using euphemisms like illiberal?), what you need is a mass movement that is too big for the state to handle.

In Hungary (an illiberal democracy) the mass movement (a new opposition party) that grew too big to handle was (partly) sparked by an anonymous report that the president pardoned a pedophile-enabler. As far as I know, the guy who noticed that pardon (buried in official communication) and sent it to the press is still anonymous. So having an anonymous communication format has its merits even if a mass movement is required to replace the government.

Sending e-mail via a possibly sanctioned entity

Posted Nov 11, 2025 16:48 UTC (Tue) by malmedal (subscriber, #56172) [Link] (9 responses)

Apologies if I'm not being clear - I'm only objecting to secret payments, not secret messages.

It is in a democracy's own best interest that its citizens can communicate safely without being overheard.

Sending e-mail via a possibly sanctioned entity

Posted Nov 11, 2025 18:12 UTC (Tue) by paulj (subscriber, #341) [Link] (8 responses)

As a final response.

We want to communicate anonymously (from the POV of others), without being overheard. We have looked at our threat model and our security requirements, and determined they are best served by obtaining phones running GrapheneOS. You lack the resources to obtain such a phone, and further, the regime you are in views the purchase of secure phones as very suspicious - you are likely to be put (at a minimum) under observation if such a purchase is detected. We have determined that it is best I purchase the phone for you (you haven't the resources), and we do so as anonymously as possible (so we have at least some plausible deniability if detected, e.g. an intercepted shipment). I am known, in the wider world, to be associated with you.

One option is for me to use Tor to go to an anonymous online bazaar, then use an anonymous distributed payment method to buy a GrapheneOS phone and have it shipped to you (ideally to some drop-box or shared address that is at least not /uniquely/ associated with you). You and I know, from the experience of others, that there is a minimal intercept rate on such shipments.

This is NOT an unrealistic example of how anonymous communication systems AND anonymous payment systems can be used to help protect activism in some places.

Sending e-mail via a possibly sanctioned entity

Posted Nov 11, 2025 20:12 UTC (Tue) by pizza (subscriber, #46) [Link]

> This is NOT an unrealistic example of how anonymous communication systems AND anonymous payment systems can be used to help protect activism in some places.

This is an example of a quasi-anonymous communication system that sorta works (except for the glaring problem that it's a literal *phone*, which means you're going to be "anonymously" tracked by $telco and/or anyone running an IMSI catcher).

Take away the "phone" part of that and you can piggyback off of public/"open" wifi, again with varying degrees of anonymity. That said, a not-terribly-repressive regime can easily require folks to present some sort of government ID and/or tie access to your device [1] as a condition of granting access to said wifi. And said regime can easily require all traffic to be routed through "great firewalls" or some other classification/inspection/tracking system [2]

And sure, you can interpose middlemen, but when $oppressive_regime has no qualms about disappearing its own citizens, all you'll accomplish is a slight delay in how long it takes your door to be kicked in.

> One option is for me to use Tor to go to an anonymous online bazaar. Then to use an anonymous distributed payment method

Again, the vulnerability here is the ability to convert this "payment method" into $national_currency on either end. Those exchanges are the choke points that governments can, and do, go after.

...I keep coming back to the "what threat vector are you trying to protect yourself against" question. Because a guido wielding a gympie trounces technical handwavery... every. single. time. (see xkcd #538)

[1] I experienced this a decade ago when traveling in the Middle East.
[2] This capability continues to be demonstrated by China

Sending e-mail via a possibly sanctioned entity

Posted Nov 12, 2025 13:12 UTC (Wed) by malmedal (subscriber, #56172) [Link] (6 responses)

> This is NOT an unrealistic

It's unrealistic to the point where it looks like a parody. Is it intended as one?

Phones are widely available in almost all countries; they are rarely a hard-to-get item. In a country where they are hard to get, such as North Korea, some kind of authorization scheme has been implemented so only government-provided phones can actually connect to the network; an activist firing up a GrapheneOS phone will be arrested immediately.

(I believe they do have provisions for tourists calling abroad, but an activist trying this will be noticed and arrested)

Sending e-mail via a possibly sanctioned entity

Posted Nov 12, 2025 17:08 UTC (Wed) by paulj (subscriber, #341) [Link] (5 responses)

"It's so unrealistic it's a parody!"...

1. proceeds to give an example of a country where phone purchases generally are restricted as described
2. fails to spot that my comment says "You lack the resources to obtain such a phone", so either I have to send you money somehow (anonymously) or I have to send a phone.
3. I may also be in the same restrictive regime, I just happen to have the resources to buy the item.
4. There may be numerous other types of items useful to activism that one may wish to purchase for oneself or others anonymously.

If your argument really is that activists never need to buy anything that may be sensitive, where anonymity is desirable, then it is your argument that is parody.

Sending e-mail via a possibly sanctioned entity

Posted Nov 12, 2025 17:10 UTC (Wed) by paulj (subscriber, #341) [Link]

Also, even if one lives in a country where phone purchases are not of themselves restricted, it may still be desirable to not leave a record for the tech-surveillance panopticon that you purchased a very particular model of phone capable of running a more secure OS.

Sending e-mail via a possibly sanctioned entity

Posted Nov 12, 2025 19:02 UTC (Wed) by malmedal (subscriber, #56172) [Link] (3 responses)

> "It's so unrealistic it's a parody!"...

> 1. proceeds to give an example of a country where phone purchases generally are restricted as described

No, I'm pointing out that anybody trying to use your OS is likely to be arrested very quickly. The phone will need to authenticate itself to the network in order to prove that it is indeed an approved phone with the correct spyware installed.

> 2. fails to spot that my comment says "You lack the resources to obtain such a phone",

No, I'm saying that phones are ubiquitous, access to one is not a limitation, and that getting a GrapheneOS phone is not going to help if you are physically in a dictatorship.

What activists need to do is to make their electronic signature as innocent as possible. One common tactic is to post coded messages to a popular forum that is also used by normal people.

With your solution, as soon as the police find the first activist with a GrapheneOS device, they will know what the traffic looks like and can use that to simplify the search for the rest.

Sending e-mail via a possibly sanctioned entity

Posted Nov 12, 2025 19:28 UTC (Wed) by pizza (subscriber, #46) [Link]

> What activists need to do is to make their electronic signature as innocent as possible. One common tactic is to post coded messages to a popular forum that is also used by normal people.

Along those lines, the Iranian revolution in the late 70s was famously seeded via already-ubiquitous cassette tapes of Khomeini's speeches.

Sending e-mail via a possibly sanctioned entity

Posted Nov 13, 2025 10:11 UTC (Thu) by farnz (subscriber, #17727) [Link] (1 responses)

The key to this is that "innocent until proven guilty" is an artefact of liberal societies. If you're in an illiberal society of some form, once you've been identified as a troublemaker, you will be found guilty of something; if necessary, police will plant or forge evidence to show that you've been involved with something society at large considers abhorrent.

Thus, your goal is to not do anything that would give the police a reason to look at you; you're reliant on the fact that there's more citizens than police, and thus they cannot monitor everyone in depth. The moment you do something that marks you out as "odd", you're either fully compliant with the regime (just slightly weird - maybe you like brandy more than vodka), or you're marked out as a troublemaker and they will find a way to get you.

Sending e-mail via a possibly sanctioned entity

Posted Nov 13, 2025 11:54 UTC (Thu) by malmedal (subscriber, #56172) [Link]

Arresting innocents is a common tactic yes. I forget the name, but a Soviet dissident recounted a conversation that went like "how long are you in for?" "Fifteen years" "what for?" "Nothing at all" "you're lying, nothing at all is ten years"

Sending e-mail via a possibly sanctioned entity

Posted Nov 10, 2025 16:13 UTC (Mon) by Wol (subscriber, #4433) [Link]

> As stated before, it is technically possible to have a distributed system that includes or relies on a distributed ledger payment system where no one can determine from the ledger, with any useful certainty, how much was sent by whom to whom.

And as far as I can tell, both you and farnz are in violent agreement on this point!

As farnz keeps on banging on, the problem is SOCIAL, and there is no way from a SOCIAL perspective that anything like this will take off.

Cheers,
Wol

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 10, 2025 17:48 UTC (Mon) by anselm (subscriber, #2796) [Link] (3 responses)

So... it's just a strawman. There are no regulations nor taxes that would apply to some trivial-scale "friends and family" next-gen-email-replacement system.

Sez you. When the tax man rings my doorbell I'll refer them to you.

Anyway, as I said, the whole payment-for-mail issue is moot as far as I'm concerned because, as I've outlined in my previous message, there are better approaches for “next-gen-email-replacement systems” that don't even involve money (let alone shady cryptocurrencies).

Incidentally, one problem that makes me not like the pay-to-play approach to email is that I run a bunch of mailing lists (some with a few hundred subscribers). If I need to pay a trivial amount for each email message sent across these lists, that trivial amount times the number of subscribers times the number of messages per day at some point becomes not quite so trivial anymore. The obvious solution to this is to charge mailing list subscribers, but then hey, suddenly instead of someone with a fun hobby I'm a news publisher running a paid-for service for the public and again all sorts of regulations start to apply (apart from the hassle connected with having to ensure that every subscriber puts their contribution into the kitty). Why would I ever go for that sort of thing when right now I don't need to pay anything above the cost of the mail server, which is a trivial amount?

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 10, 2025 18:00 UTC (Mon) by paulj (subscriber, #341) [Link] (2 responses)

> (apart from the hassle connected with having to ensure that every subscriber puts their contribution into the kitty)

If your understanding of what I've been sketching is a system where you have to manually charge people each time they send their message to your distribution group, then... let's just leave this. (It's way OT anyway).

Also, again, there are no tax obligations for a group of people running systems for informal associations. There are all kinds of clubs out there where people pay money to cover the costs of the activity of that club (e.g. hosting a website, hosting races for things like running and cycling clubs, buying club kit, etc.), and it's all on an unincorporated basis, and there are no tax obligations on the club or the person who handles the money for the specific activity that generated the cost, if there are only costs involved. Both English and Irish law definitely have the concept of unincorporated associations, I know this for a fact, and I'm pretty sure there is an equivalent in Germanic jurisdictions - that probably then covers very large swathes of the world, given how many other jurisdictions derive from those in some way.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 10, 2025 18:04 UTC (Mon) by paulj (subscriber, #341) [Link]

If you search for unincorporated association you will find the UK HMRC page that says what I wrote there, as you don't believe me.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 10, 2025 19:02 UTC (Mon) by anselm (subscriber, #2796) [Link]

If your understanding of what I've been sketching is a system where you have to manually charge people each time they send their message to your distribution group, then... let's just leave this.

Now you're building the strawman. Obviously, the way this would really work is that people subscribe to the mailing list in the way they would subscribe to a magazine, i.e., X amount of money/month gets you everything that goes through the list. You would calibrate X such that your cost to send N messages per month to M subscribers would be less than X*M. Depending on the readership and volume of your mailing list, X*M can be a non-trivial amount of money. You would still have to have some sort of infrastructure to sort out every subscriber's payments (especially since, for d…n sure, you don't want every subscriber to have to deal with the likes of Monero), and depending on how big X*M is, you're absolutely running a commercial enterprise here.
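The calibration described above can be put into a back-of-the-envelope sketch (all numbers here are invented for illustration, not real delivery costs): find the smallest monthly fee X such that X*M covers sending N messages to M subscribers.

```python
# Hypothetical sketch of the break-even calibration described above.
def break_even_fee(msgs_per_month: int, subscribers: int,
                   cost_per_delivery: float) -> float:
    """Smallest monthly fee X per subscriber such that X * M covers
    sending N messages to M subscribers at a given per-delivery cost."""
    total_cost = msgs_per_month * subscribers * cost_per_delivery
    # The subscriber count cancels: X = N * cost_per_delivery
    return total_cost / subscribers

# e.g. 300 messages/month at a hypothetical 0.001 currency units per delivery
fee = break_even_fee(300, 500, 0.001)  # 0.3 units per subscriber per month
```

Note that while the per-subscriber fee is independent of M, the operator still has to front and account for the full X*M money flow each month, which is exactly the administrative burden being objected to here.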

Again, the whole idea of founding an email system on micropayments is something that will never fly, anyway. There are better ways to fix email which also require large numbers of participants to warm to the idea but don't involve money.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 16:00 UTC (Fri) by paulj (subscriber, #341) [Link] (5 responses)

Ok, I think I know why I never replied. Nothing went to that address at that time.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 16:14 UTC (Fri) by farnz (subscriber, #17727) [Link] (4 responses)

OK, so how do we debug this? I sent the money, as far as I can tell, and you didn't receive it. My records show that I paid Binance to send 87774rpgLdmjCFLqyV3BYN6VwBzdvaVbccVUF2K3NHGEFyoQKxCTqcxeDcPHpQPixqitthXhYK5uGbYuFExff24ACiaAUkH a total of 0.012 Monero just before posting that comment; your records show that it never arrived.

From my end, this is undebuggable; I know how to handle it in normal cases, but not here.

With the conventional banking system (SWIFT, for example), I'd raise a complaint with my bank; they would then identify where they sent the money, and would either present to me proof that it had been received at the intended recipient, or refund me if they could not trace it to the destination I told them to send it to.

With the card system, I'd open a merchant dispute via my card company, and they'd give the merchant a chance to respond to the dispute (which identifies the transaction to the merchant as part of the dispute, so if it's just bad record keeping at their end, it gets fixed). If the merchant doesn't respond adequately, in the view of my card company, then I get a refund.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 16:54 UTC (Fri) by paulj (subscriber, #341) [Link] (3 responses)

Your custodian should have the transaction ID and the transaction view key. You may be able to retrieve that. Though, if your custodian batched up outgoing payments, they probably won't give you the tx view key. You should still be able to talk to the support of your custodian and have them debug it. Just like SWIFT or whatever... (Though I think Binance have since delisted Monero, cause it's too good).

I suspect your custodian has a minimum withdrawal amount, and the 0.012 XMR was well below that and hence was never sent. Whether that resulted in the amount being taken from your balance with them, I don't know. In which case, for tiny payments you would need to use a proper wallet under your control (e.g., Monerujo is on F-Droid, perhaps Cake Wallet is good too). That's a social issue wrt demand, at this point in time - not a technical one.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 16:57 UTC (Fri) by farnz (subscriber, #17727) [Link] (2 responses)

I'll ask Binance for those details; they did confirm that they sent 0.012 XMR, but didn't give me a transaction ID or a transaction view key at the time.

The reason for sending that much is that it put me just over their minimum withdrawal amount (I paid them £2.50 in total, including their fees, plus the cost of the Monero).

But again, this is a migration issue - I'm not plugged into the Monero ecosystem, and I have no idea how I'd get Monero other than via a company like Binance. Again, if your system depends on people plugging into Monero, how do you expect people to know this sort of detail?

And it's not like SWIFT, because with SWIFT, I identify the transaction to my bank, and they take responsibility for following through to confirm that it either arrived, or didn't. If it didn't arrive, they'll refund me - and I can try again. I'm not entirely sure what the equivalent is here.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 10, 2025 12:00 UTC (Mon) by paulj (subscriber, #341) [Link] (1 responses)

You can get a monero wallet from, e.g., the Monero website, or from the f-droid store (e.g. monerujo). Create a wallet, reply with an address and I or someone else may well reimburse your previous costs. ;)

I don't think Binance list Monero anymore, cause Monero is too good at what it does. We're basically in the middle of a repeat of the US' war on cryptography in the 90s. Mostly led by the EU this time. Just like then, it will fail, cause you can not unlearn and ban math. The CryptoNote paper exists, it's a beautiful paper - probably one of the seminal works in distributed consensus along with the papers on Bitcoin, Paxos, Radia Perlman's Byzantine General Routing System paper/Ph.D., Lamport's clock, and such - and they can not make it go away. Just like in the 90s, they will lose. (I need to get a T-shirt printed with the key equations from CryptoNote, like the old RSA t-shirts from the first crypto war).

So yes, this technology is still early days, it is not well integrated into other things, and it won't be for a while for various social reasons around distributed payment technologies and the clash these cause with state desires for tight control. Distributed payment technologies will win out eventually though.

If you want an entity to deal with, who will handle everything and indemnify you, there will be such entities. The existence of a technology that allows anyone to participate does NOT prevent anyone setting up a business around it so you can have a more traditional interface to it.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 10, 2025 12:10 UTC (Mon) by farnz (subscriber, #17727) [Link]

I don't see how to set up a Monero wallet that accepts GBP; I sent you money using a payment from my credit card (which I paid off).

It sounds, though, like what you're saying is "this technology is too new and unreliable for people not yet willing to dive in fully", which in turn makes it completely unsuitable for sending money to pay for e-mail delivery. I have to commit to replacing my existing financial management (which I'm happy with) with a new technology I don't fully understand or trust, replace my existing private mail server (which I'm happy with) with a new one that I don't fully understand or trust, and do so for questionable benefits (since the assertions around what's going to work in the new system are at odds with the history of Prestel and of SMS).

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 17:21 UTC (Thu) by dskoll (subscriber, #1630) [Link] (3 responses)

Once again:

  • The problems are not primarily technical.
  • Please explain to my non-technical Mom how she needs to send email to her cousin going forward.
  • Criminals will get around it anyway.
  • The problems with the current email system have so far proven too mild to spur the adoption of any of hundreds of similar proposals. See the FUSSP link I posted earlier.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 10:34 UTC (Fri) by paulj (subscriber, #341) [Link] (2 responses)

It's not technical, indeed.

Your mother? Probably nothing changes.... She keeps paying with her eyeballs and data. Others may choose to avoid that and pay actual money in some new communication system. That's how it already is today with email. The only thing that changes is that instead of layers of hacky side-protocols under the hood to try to stop spam, you just have one clean micro-payment layer to make spam uneconomical. The business model around that, which affects UX, can vary in many ways.

You would not design a new messaging system in the way email is today.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 13:05 UTC (Fri) by pizza (subscriber, #46) [Link] (1 responses)

> You would not design a new messaging system in the way email is today.

Of course not. You'd design it to be controlled by a single party (ie you), only accessible via official applications (backstopped by DRM), and explicitly monetized (everyone pays-to-use *and* forced unskippable advertisements) with all payments going solely to you.

(ie the wet dream of AT&T and what every big-tech's IM system aspires to be)

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 13:16 UTC (Fri) by dskoll (subscriber, #1630) [Link]

pizza is right. Anything designed today would benefit oligarchs and data brokers and oppress its "users". We should thank our lucky stars email became entrenched before the Internet enshittified.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 15:20 UTC (Thu) by dskoll (subscriber, #1630) [Link] (8 responses)

In general, we /do/ have a way. The technical ability is there.

Well, I dispute that. As I pointed out, criminals will hack the system in a way that can't effectively be protected against without turning email into some walled-garden proprietary system.

But more to the point: Even if the technical ability were there, there's no incentive to switch. Anything that makes email more "secure" is also going to make it less convenient, and convenience is the #1 selling point of email.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 8:37 UTC (Fri) by taladar (subscriber, #68407) [Link] (7 responses)

If you think email is convenient, you have never used it as a user, as a programmer/admin trying to send emails, or as the admin of a mail server. Email is a giant pain from pretty much every role in the system, mainly because it is a mess of semi-underspecified standards leading to half-compatible implementations, with a pile of half-solutions to the spam problem piled on top.

The main selling point of email at this point is "everyone uses it", so basically just network effects.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 13:15 UTC (Fri) by dskoll (subscriber, #1630) [Link] (1 responses)

I have used email in all of those roles: As an end-user, as a programmer/admin trying to send email, and as a mail server administrator.

It's very convenient as an end-user, not too bad as a programmer, and a little annoying but manageable as a mail server administrator. I was also in the email security field for almost two decades and helped administer systems with hundreds of thousands of users... so I know email!

Don't discount the network effect. It's huge. And it's why none of the countless proposals similar to yours have ever gained much traction; people have viewed them as too much cost for too little benefit.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 13:23 UTC (Fri) by raven667 (subscriber, #5198) [Link]

> If you think email is convenient you have never used email as either a user, a programmer/admin trying to send emails or the admin of a mail server.

I think you have greatly misjudged the experience of the person you are talking to, so this comment is probably a mistake.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 13:52 UTC (Fri) by anselm (subscriber, #2796) [Link] (4 responses)

The nice thing about email is that it works reasonably well for the vast majority of people without arbitrarily restricting the set of people one can communicate with to those who are prepared to subscribe to the same more-or-less-proprietary walled garden as oneself. Sure, Signal (for example) is nice but you only get to use it to talk to other people who are also on Signal, using a special program you need to install that is only good for talking to other people on Signal, and may or may not even be available for the platform you're using. If the people who run the Signal servers ever get tired of doing it¹ then congratulations, you get to find a new service where all the people hang out who you used to talk to on Signal, and hope that whatever program you need to use to get on that service will also run on the computer(s) you'd like to use. And so on².

With email at least, the underlying “mess of semi-underspecified standards” is sufficiently well-understood by enough people all over the place that the service itself will not be going away anytime soon. We cannot guarantee the eternal existence of any particular mail server instance or piece of software used to send or process email, but it is overwhelmingly likely that you will always be able to find some MUA that runs on your system (however unusual) and can connect to some MTA in order to get email from you to whoever@somewhere.com. In a pinch you could even write your own. For all its obvious shortcomings and all the legitimate criticism one could level at the email system, it's what we have, it's everywhere, and so far nobody, as in nobody, has been able to come up with a viable contender to replace it that doesn't involve a walled garden or single centralised point of failure of some kind. It may be “just network effects”, but those network effects are pretty hard to beat.

1. We can debate about how likely that is to happen, but in point of fact it's not as if you have a contract with the Signal people that says they can't simply stop providing the service to you whenever they feel like it. Certainly recently when the EU was debating forcing messenger services to scan messages for unwanted content, Signal was considering withdrawing from the EU altogether, which would obviously have sucked for Signal users in the EU (certainly those without the wherewithal to use a VPN to connect to somewhere where Signal is still available).

2. Sure, you could run your own Signal server, but then you would need to convince everyone you want to communicate with to use that particular server, too. (So instead you use Mastodon, but that of course comes with its own set of issues and restrictions, and of course you would need to convince everyone you want to communicate with to also use Mastodon.) With email, you can run your own server and it will generally be fine for communicating with people on arbitrary other servers.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 16:10 UTC (Fri) by paulj (subscriber, #341) [Link] (3 responses)

> With email, you can run your own server and it will generally be fine for communicating with people on arbitrary other servers.

This isn't true. You may be able to receive email, but you will struggle to have others receive email you send, unless you spend a good bit of time configuring various hacky side-protocols and testing them and maintaining them. That's sort of the origin of this off-story-topic sub-thread.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 16:20 UTC (Fri) by pizza (subscriber, #46) [Link] (1 responses)

> unless you spend a good bit of time configuring various hacky side-protocols and testing them and maintaining them.

I set up DKIM on systems I administer nearly seven years ago. I don't recall it being particularly challenging (on the order of a few hours), and I am not exaggerating when I say it has required zero maintenance since.

Honestly, email barely even registers on the "list of headaches involved in running public-facing services" these days.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 18:59 UTC (Fri) by dskoll (subscriber, #1630) [Link]

Also. I've been hosting my own email on behalf of a company I used to own since 1999, and self-hosting my personal email since 2018. The initial setup took some time, but there's no ongoing maintenance needed for DKIM/DMARC/SPF unless you make changes to your network topology, and that hasn't yet happened for me. It's really not all that hard, and IMO we need a wide variety of email hosting providers and self-hosters to ensure that concentration amongst the Big Ones never reaches the point where they can unilaterally change the price of admission.
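For reference, the records in question are just DNS TXT entries. A minimal sketch for a hypothetical example.com follows; the selector name, key material, and addresses here are placeholders for illustration, not anyone's real configuration:

```
; SPF: only this domain's MX hosts may send mail on its behalf
example.com.                 IN TXT "v=spf1 mx -all"

; DKIM: public key for a hypothetical selector "mail"; the p= value is
; the base64 public key emitted by a tool such as opendkim-genkey
mail._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<base64-public-key>"

; DMARC: start with a report-only policy before enforcing anything
_dmarc.example.com.          IN TXT "v=DMARC1; p=none; rua=mailto:postmaster@example.com"
```

Once these are published and the MTA signs outgoing mail with the DKIM private key, there is, as noted above, essentially nothing to maintain unless the network topology changes.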

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 7, 2025 17:14 UTC (Fri) by anselm (subscriber, #2796) [Link]

These days you can get nifty oven-ready container-based email systems – usually based on Postfix, Dovecot, and the like – which will take care of that stuff for you. But even setting up SPF and DKIM from scratch isn't exactly rocket science. There are loads of web pages which explain how to do it, in easy-to-follow steps, and doing just that will take you a long way towards being able to send email wherever you like.

I've been running mail servers (on my own behalf and that of various companies and non-profits) and teaching other people how to do it for 30+ years now, and it's generally not something I'm losing any sleep over. As far as I'm concerned, claims like “you will struggle to have others receive email you send” are wildly exaggerated.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 8:26 UTC (Thu) by taladar (subscriber, #68407) [Link] (6 responses)

Personally I think that the 90% of automated emails that come from systems where I have an account anyway would be much better served by some simple web hook and a message format that includes more information on what it is actually sending.

That way I wouldn't have to e.g. log in to my bank website to see their actual message or download their monthly list of transactions as a PDF just because email is insecure.

Messages that are actual communication from other people, with some kind of email notification tacked on, could be sent directly to me as desktop notifications or phone push notifications by my server if I wish, maybe even according to some rules.

Email seems like a bad format for that.
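A sketch of what such a structured notification might look like on the receiving end; every field name here is invented for illustration, not any real bank's API:

```python
import json

# Invented, minimal schema for a structured notification payload.
REQUIRED_FIELDS = {"type", "sender", "subject", "urgency"}

def parse_notification(raw: bytes) -> dict:
    """Parse a webhook body and check it against the minimal invented schema."""
    msg = json.loads(raw)
    missing = REQUIRED_FIELDS - msg.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return msg

# What a sender might POST instead of an opaque "you have a message" email:
payload = json.dumps({
    "type": "statement.ready",          # machine-readable message kind
    "sender": "bank.example",           # hypothetical sender identifier
    "subject": "Monthly statement available",
    "urgency": "low",                   # receiver decides: interrupt or batch
}).encode()

msg = parse_notification(payload)
```

The point of the `type` and `urgency` fields is that the *receiving* side, not the sender, gets to decide whether something becomes an interrupting push notification, a digest entry, or is filtered by a rule.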

Email as an account recovery or login control tool is also pretty bad, especially the way everyone uses email as logins and can thus associate my accounts on a vast number of platforms with each other once each of them had a data breach.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 8:46 UTC (Thu) by Wol (subscriber, #4433) [Link]

> That way I wouldn't have to e.g. login to my bank website to see their actual message or download their monthly list of transactions as a PDF just because email is insecure.

And yet they were quite happy to send stuff by snail-mail, which is arguably even less secure!

Once you've verified the end points, email is as - or likely more - secure than snail mail. Sure stuff can get lost. Sure a determined cracker can steal email in transit. But the only place it's likely to get stolen from is the customer's own system, and forcing the customer to log in and retrieve a message or PDF provides absolutely no security there!

And the way it's implemented, where you have to log in to read messages, can be a disaster too. My "Building Society" (it was one - thanks to the mess of UK Banking reforms I don't have a clue what it is now) seems to be a bit clueless on that front. I got sent an important - time-sensitive - message via their internal messaging system, only for me never to see it, because I got no notification whatsoever that it was waiting for me. The zeroth rule of successful investing (which the investment firms are desperate for us to break, because it earns them loads of lovely commission) is to treat investments like mushrooms - leave them alone in the dark until they mature. Which I did, so I never logged in, and never saw the message ... WHOOPS!

Cheers,
Wol

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 9:15 UTC (Thu) by anselm (subscriber, #2796) [Link] (1 responses)

That way I wouldn't have to e.g. login to my bank website to see their actual message or download their monthly list of transactions as a PDF just because email is insecure.

My bank apparently thinks that PGP-encrypted email is secure enough to send me individual notices of transactions on my current account, but not secure enough to send me monthly statements or other types of communication. I should probably be grateful for small miracles.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 11:29 UTC (Thu) by paulj (subscriber, #341) [Link]

You have a bank that knows how to send PGP encrypted email? Wow :)

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 14:38 UTC (Thu) by dskoll (subscriber, #1630) [Link] (2 responses)

I actually hate having to log in to some system or even visit a web site just to read a message that could have been sent by email. The absolute worst are the ones that send you an email just to tell you that you have a message you need to read. Just send me the damn message in the first place!!

I don't want phone or desktop notifications for most things. Those are far more intrusive than emails because they generally make a noise or pop something up that demands attention. An unexpected withdrawal from my account? Yes, interrupt me. A notification that my statement is ready? No, do not interrupt me! If I get too many notifications, I'll block them, which will defeat the purpose of important notifications getting through.

I agree that relying on email for account recovery is not all that secure. But until everyone has a Yubikey that they never lose (plus a spare!) and uses it religiously, we're kind of stuck with best-effort mechanisms.

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 15:14 UTC (Thu) by geert (subscriber, #98403) [Link] (1 responses)

The best ones are the emails from the government (delivered multiple times, through multiple portals), which tell you you have a new message.
After logging in securely, you can download the message, which is a PDF file containing a nice formal letter on government letterhead telling you you have a new document at another government site.
After logging in on the second site, you can finally enjoy the real document, which turns out not to be that urgent and important anyway...

Email insecurity (was One of the great benefits of Open Source)

Posted Nov 6, 2025 18:59 UTC (Thu) by rschroev (subscriber, #4164) [Link]

Are you talking about Belgium? Because that sounds just exactly like it. Or is there another government with systems just as convoluted?

One of the great benefits of Open Source

Posted Nov 4, 2025 10:09 UTC (Tue) by farnz (subscriber, #17727) [Link]

I suspect those people are also "this was the dream machine when I was young", and so don't want to admit that, while their dream machine was great for its time, it's not a great machine any more, as we expect more of systems than we did back then.

You see this on the software side in a subset of RISC OS, VMS, and AmigaOS enthusiasts, among others - people for whom a specific OS was amazing 30-plus years ago, and who can't admit that the lack of development since means it's no longer what it was back in the day (as opposed to those who do the work to keep their preferred OS usable for their needs).

And, to be absolutely clear, this excludes people like Taylor and Glaubitz, who do seriously hard work to keep their preferred systems working well.

One of the great benefits of Open Source

Posted Nov 5, 2025 22:08 UTC (Wed) by IanKelling (subscriber, #89418) [Link]

Just a shower thought: One of the first things I learned in computing was assembly for the m68k. It can be rm'ed from Debian, but not from my brain or my very finite lifespan.

The tragedy of laptops

Posted Nov 8, 2025 17:27 UTC (Sat) by cesarb (subscriber, #6266) [Link]

> Older hardware often has better keyboards and nicer screens (to my eyes).

That is the tragedy of laptops. When you're using a desktop, your old keyboard from the late 1990s with a USB 1.0 connector can be plugged directly, or with a simple passive adapter, into any modern computer, and even older keyboards with a PS/2 connector can be connected with an active adapter (or in some cases directly; some new computers still have a PS/2 socket). Your old screen from the early 2000s with a DVI-D connector can be connected with a passive adapter to many modern computers (and with an active adapter to nearly all of them), and even older screens with a VGA or DVI-A connector can be connected with an active adapter (or in some cases directly; some new computers still have a VGA socket).

Laptops, on the other hand, have not only electrical incompatibility, but also mechanical incompatibility. Even parts from the same laptop line but last year's model will not be compatible with today's models, with very few exceptions (Framework being a notable one).