No, not the AI. Just the owners of the means of production, like AI.
The fact that capital owners successfully avoid contributing to the financing of our states and social systems is, in my view, one of the fundamental problems of our time.
A PWA, primarily for my wife and my daughter. They can order their hot chocolate and their coffee as if they were grabbing something at a fancy café downtown, but instead it's at home and I'm the barista. It's quite nice to have when my wife comes back from work and wants something specific, or when we're waiting for a few friends to visit: they can order exactly the available beverages, and everything is ready when they arrive.
It was also a good playground for me to implement Web Push notifications (to never miss new orders).
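For anyone curious, the client-side half of that looks roughly like the sketch below. The VAPID key, the /api/subscriptions route, and the function names are placeholders for illustration, not the actual app's code.

    // Rough sketch of subscribing a browser to Web Push (placeholder names).
    const VAPID_PUBLIC_KEY = "BPlaceholderPublicKey"; // your real VAPID public key

    async function saveSubscription(sub: PushSubscription): Promise<void> {
      // Hypothetical backend route that stores the subscription for later pushes.
      await fetch("/api/subscriptions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(sub),
      });
    }

    async function enableOrderNotifications(): Promise<void> {
      if (!("serviceWorker" in navigator) || !("PushManager" in window)) return;
      const registration = await navigator.serviceWorker.ready;
      const subscription = await registration.pushManager.subscribe({
        userVisibleOnly: true, // browsers require user-visible notifications
        applicationServerKey: VAPID_PUBLIC_KEY,
      });
      await saveSubscription(subscription); // the server then pushes "new order" events
    }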
It's a basic Nuxt 3 app with Appwrite as the backend. It has rough edges, but it's more than enough for our household use!
If you want to spam my phone with notifications, please visit my café: https://mytinycafe.com/alix
I'm CTO at a vertical SaaS company, paired with a product-focused CEO with deep domain expertise. The thesis doesn't match my experience.
For one thing, the threat model assumes customers can build their own tools. Our end users can't. Their current "system" is Excel. The big enterprises that employ them have thousands of devs, but two of them explicitly cloned our product and tried to poach their own users onto it. One gave up. The other's users tell us it's crap. We've lost zero paying subscribers to free internal alternatives.
I believe that agents are a multiplier on existing velocity, not an equalizer. We use agents heavily and ship faster than ever. We get a lot of feedback from users about what the internal tech teams are shipping, and based on that there's little evidence of any increase in velocity on their side.
The bottleneck is still knowing what to build, not building. A lot of the value in our product is in decisions users don't even know we made for them. Domain expertise + tight feedback loop with users can't be replicated by an internal developer in an afternoon.
According to the Financial Times, Roomba has sold more than 40 million robotic devices, most of them robotic vacuum cleaners.[a]
Many of those vacuum cleaners have cameras, can move around on their own, and are connected to the Internet. If they're taken offline, they stop working. Many have microphones too.
The new Chinese owner will get control of a network of tens of millions of Internet-connected, autonomously mobile, camera/microphone-equipped robots already inside people's homes and offices.
More than 40 million is a lot. For comparison, the US has ~132 million households.
---
[a] https://www.ft.com/content/239d4720-aee4-443d-a761-1bd8bb1a1...
Props to them for actually updating their status page as issues are happening rather than hours later. I was working with Claude Code and hit an API error, checked the status page, and sure enough there was an outage.
This should be a given for any service that others rely on, but sadly this is seldom the case.
I love the quote the article starts with:
> Neither of us had prior experience developing mobile apps, but we thought, “Hey, we’re both smart. This shouldn’t be too difficult.”
I think, 40 years from now when we're writing about this last decade or so of software development, this quote is going to sum it all up.
A prime example of premature optimization.
Permanent identifiers should not carry data. This is like the cardinal sin of data management. You always run into situations where the thing you thought would surely never change (so it seemed safe to squeeze into the ID to save a lookup) changes after all. Then people suddenly find out they have a new gender identity, and now they need a new final digit in their ID numbers too.
Even if nothing changes, you can run into trouble. Norwegian PNs have your birth date (in DDMMYY format) as the first six digits. Surely that doesn't change, right? Well, wrong, since although the date doesn't change, your knowledge of it might. Immigrants who didn't know their exact date of birth got assigned 1 January by default... And then people with actual birthdays on 1 January got told, "sorry, you can't have that as your birth date, we've run out of numbers in that series!"
Librarians in the analog age can be forgiven for cramming data into their identifiers, to save a lookup. When the lookup is in a physical card catalog, that's somewhat understandable (although you bet they could run into trouble over it too). But when you have a powerful database at your fingertips, use it! Don't make decisions you will regret just to shave off a couple of milliseconds!
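A minimal sketch of the alternative, with illustrative names rather than any particular system's schema: keep the identifier opaque and let the attributes live in the record, where they can change or be corrected.

    import { randomUUID } from "node:crypto";

    // Anti-pattern: "010190-12345" style IDs that encode birth date and gender.
    // If either attribute is wrong or changes, the "permanent" ID must change too.

    interface Person {
      id: string;          // opaque surrogate key, never reinterpreted
      birthDate?: string;  // ISO 8601; optional because it may be unknown
      gender?: string;     // free to change without touching the id
    }

    function newPerson(birthDate?: string, gender?: string): Person {
      return { id: randomUUID(), birthDate, gender };
    }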
> I even checked one of his responses in WhatsApp if it's AI by asking the Meta AI whether it's AI written, and Meta AI also agreed that it's AI written
I will never understand why some people apparently think asking a chat bot whether text was written by a chat bot is a reasonable approach to determining whether text was written by a chat bot.
Reminds me of "cancer alley" [1].
As somebody who's looked into this a bit, the deeper I dug the more I reluctantly moved toward the conclusion that big corporations are indeed the baddies. I have an instinct to steel-man both sides, but not every issue has two compelling sides to it...
One example of them clearly being the baddies is paying people to astroturf social media in defense of the Roundup pesticide [2].
1. https://en.wikipedia.org/wiki/Cancer_Alley
2. https://galiherlaw.com/media-manipulation-comes-out-during-m...
Simplification of my digital self. Removed most of my online accounts. Removed all my VPS's. Removed most apps from my phone except core ones. Cancelled a lot of online subscriptions.
In the real world finally moved everything to USB-C. Gave all my old cables away. I have two chargers in my home and a handful of C to C cables. Everything connects to everything now.
Home is now downgraded to a dumb home. Lights work on physical toggles. No hubs or sensors anywhere. Heat and AC are controlled by a dumb panel on the wall.
It feels freeing.
Thousands of systems, from Google to script kiddies to OpenAI to Nigerian call scammers to cybersecurity firms, actively watch the certificate transparency logs for exactly this reason. Yawn.
> There were unspoken rules, commandments passed down from teacher to student, year after year. The first commandment? Thou shalt begin with a proverb or a powerful opening statement. “Haste makes waste,” we would write, before launching into a tale about rushing to the market and forgetting the money. The second? Thou shalt demonstrate a wide vocabulary. You didn’t just ‘walk’; you ‘strode purposefully’, ‘trudged wearily’, or ‘ambled nonchalantly’. You didn’t just ‘see’ a thing; you ‘beheld a magnificent spectacle’. Our exercise books were filled with lists of these “wow words,” their synonyms and antonyms drilled into us like multiplication tables.
Well, this is very interesting, because I'm a native English speaker that studied writing in university, and the deeper I got into the world of literature, the further I was pushed towards simpler language and shorter sentences. It's all Hemingway now, and if I spot an adverb or, lord forbid, a "proceeded to," I feel the pain in my bones.
The way ChatGPT writes drives me insane. As for the author, clearly they're very good, but I prefer a much simpler style. I feel like the big boy SAT words should pop out of the page unaccompanied, just one per page at most.
Tractors largely replaced human labour in farming about a hundred years ago. Should we have started taxing tractors?
I really have difficulty seeing AI as anything other than yet another type of machinery. If your argument is "but it's replacing ALMOST ALL human labour" - well, the same argument was valid for tractors a hundred years ago (when almost everyone was employed in agriculture).
I think that the real power of spaced repetition is not in flashcard applications like this. It is in behavior modification.
Let's take a real example to show how this works.
August 19, 2025. My wife called me in to help her decide what to do about a dentist that she thought was ripping her off. A couple of quick suggestions later, and she was mad at me for not having heard the problem through before jumping in with a bad fix. As soon as she was mad, I immediately saw how stupid what I did was, and that this never goes well. But, of course, it was now too late.
Not a mistake I was going to make for a while. But, given my history, a mistake I was bound to make again.
I changed that. This time I stuck this into my spaced repetition system. Each time the prompt comes up, I remember that scene, holding in mind how important it is to emotionally engage, not offer quick suggestions, and be sure to listen to the full problem in detail. It takes me less than 30 seconds. Reviewing this prompt for my whole lifetime will take less than 15 minutes of work. Just typing this up has taken more work than I'll spend on it in the next several years.
This mistake hasn't happened since. Not once. And I believe it won't again in my life.
I have literally changed dozens of such behaviors. My wife says that it is like there is a whole new me. She can't believe the transformation.
All it took was looking at spaced repetition as general purpose structured reinforcement, and not just as a way to study flashcards.
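The arithmetic behind "less than 15 minutes" is easy to check. Here's a rough sketch; the doubling interval is an assumption for illustration, not any particular scheduler's algorithm.

    // Rough estimate: 30-second reviews, intervals that double each time.
    function lifetimeReviewMinutes(
      secondsPerReview = 30,
      firstIntervalDays = 1,
      multiplier = 2,
      lifetimeDays = 50 * 365,
    ): number {
      let reviews = 0;
      let intervalDays = firstIntervalDays;
      let elapsedDays = 0;
      while (elapsedDays < lifetimeDays) {
        reviews += 1;
        elapsedDays += intervalDays;
        intervalDays *= multiplier;
      }
      return (reviews * secondsPerReview) / 60;
    }

    console.log(lifetimeReviewMinutes()); // ~7.5 minutes across ~15 reviews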
>What is the expected compensation for participants? Compensation varies based on experience level and agency placement. Annual salaries are expected to be in the approximate range of $150,000 to $200,000. Benefits include health insurance, retirement plans, paid time off, and eligibility for performance-based awards.
>Tech Force will primarily recruit early-career technologists
So "early-career" but they're going to get paid GS-14/15 pay[1] in DC? New grad engineers in DC are going to be GS-7/9 at best. This is either a blatant lie, or created by someone who has no idea of how federal pay works (or both).
As an aside, I was a fed for >10 years and left last year for industry but stay in touch with friends still working federal jobs. Before this administration recruiting was extremely difficult and candidate quality was low. I've heard that it's nearly impossible now and in the last 18 months they've only been able to hire a single person. Federal jobs used to be considered stable, with good benefits, but low pay. Now they're unstable, the current administration is actively working to make benefits worse, and the pay is still really low.
[1] https://www.opm.gov/policy-data-oversight/pay-leave/salaries...
Our household (and I suspect many with us) bought a Roomba specifically to not give the Chinese government a roving camera in our home. Ouch!
This is the cost of complacency. They were ahead for so long, then the likes of Roborock just left them in the dirt. I remember the first time I tried one of the Roborock devices; until then I had been a long-time Roomba user (like, 20 years). I just couldn't believe how much better it was. And iRobot just stubbornly refused to iterate on their fundamental products.
Hello, I'm one of the engineers who worked on the incident. We have mitigated the incident as of 14:43 PT / 22:43 UTC. Sorry for the trouble.
One of the dark consequences of America losing its city-upon-a-hill aspirations is that we're less able to effectively call out evil abroad. What is happening to Jimmy Lai should not have been allowed to pass this quietly.
As an aside, it's been fascinating reading the comments here about news media.
People want journalists to publish quickly AND only publish what’s fully verified.
They want anonymous sources named "in the spirit of truth," without grappling with the reality that doing so would instantly dry up anyone risking their job, or worse, to provide information.
They expect journalists to release raw information as soon as they have it, while simultaneously acting as perfect filters: never amplifying rumors or being wrong, even as new facts emerge.
They want neutrality, except when neutrality conflicts with their priors.
It's no wonder that morale among journalists is at an all-time low. Is any other profession held to such an impossible standard?
>Running npm install is not negligence. Installing dependencies is not a security failure. The security failure is in an ecosystem that allows packages to run arbitrary code silently.
No, your security failure is that you use a package manager that allows third parties to push arbitrary code into your product with no oversight. You only have "security" to the extent that you can trust the people who control those packages to act both competently and in good faith ad infinitum.
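To make the "arbitrary code silently" point concrete: any dependency can register an install-time script, and npm will run it with your user's permissions unless you opt out with `--ignore-scripts` or `ignore-scripts=true` in .npmrc. A deliberately harmless sketch of what such a script can reach:

    // postinstall.js (sketch), wired up via the package's
    // "scripts": { "postinstall": "node postinstall.js" } entry.
    // Harmless stand-in for exfiltration: just report what it could have read.
    import { readFileSync } from "node:fs";
    import { homedir } from "node:os";
    import { join } from "node:path";

    try {
      const npmrc = readFileSync(join(homedir(), ".npmrc"), "utf8");
      console.log(`postinstall ran with access to ${npmrc.length} bytes of ~/.npmrc`);
    } catch {
      console.log("postinstall ran; no ~/.npmrc found");
    }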
Also, the OP seemingly implies credentials are stored on-filesystem in plaintext, but I might be extrapolating too much there.
A few things I've come to personally believe after spending years developing web and robotics software in Python/JavaScript then spending years having to maintain while constantly adding new features and dealing with company pivots:
- The types exist whether you write them down or not.
- If they're not written down, they're written down in your head.
- Your head is very volatile and hard for others to access.
- Typing is an incredibly good form of documentation.
- JSDoc and TypeScript are standards/formats for typing (see the small sketch after this list). Like any tools, they both have advantages and disadvantages. Neither is objectively better than the other.
- Make informed decisions on how you'll describe your types, and then be consistent and unsurprising.
- A type checker is the computer saying, "okay then, prove it" about your program's type validity.
- Not every program benefits from the same amount of "prove it."
- Too much can be as bad as too little. You're wasting resources proving throwaway code.
- I like languages that let you decide how much you need to "prove it."
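To make the JSDoc-vs-TypeScript point concrete, here's the same made-up function written both ways; with "allowJs"/"checkJs" enabled, the TypeScript checker verifies the JSDoc version in a .js file much as it checks the annotated version in a .ts file.

    // JSDoc style (lives in a .js file; checked when "checkJs" is on):

    /**
     * @param {string} name
     * @param {number} [retries]
     * @returns {Promise<string>}
     */
    async function greetJsDoc(name, retries = 3) {
      return `hello ${name} (up to ${retries} tries)`;
    }

    // TypeScript style (lives in a .ts file):

    async function greetTs(name: string, retries: number = 3): Promise<string> {
      return `hello ${name} (up to ${retries} tries)`;
    }

Which one you pick matters less than picking one and staying consistent.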
I really like this author's summary of the 1983 Bainbridge paper about industrial automation. I have often wondered how to apply those insights to AI agents, but I was never able to summarize it as well as OP.
Bainbridge by itself is a tough paper to read because it's so dense. It's just four pages long and worth following along:
https://ckrybus.com/static/papers/Bainbridge_1983_Automatica...
For example, see this statement in the paper: "the present generation of automated systems, which are monitored by former manual operators, are riding on their skills, which later generations of operators cannot be expected to have."
This summarizes the first irony of automation, which is now familiar to everyone on HN: using AI agents effectively requires an expert programmer, but to build the skills of an expert programmer, you have to do the programming yourself.
It's full of insights like that. Highly recommended!
I'm a co-founder at WonderProxy, we didn't make their list (we target people doing application testing, not consumer VPNs).
We're in 100+ countries, and I'll stand by that claim. It's a huge pain in the neck. In our early years we had a lot of problems with suppliers claiming to be in Mexico or South America who were actually just in Texas. I almost flew to Peru with a rackmount server in my luggage after weeks of problems, that plan died when we realized I'd need to figure out how to pay Peruvian income tax on the money I made in country before I could leave.
We've also had customers complain that a given competitor offered a Middle Eastern country we'd had trouble sourcing. A little digging on our part and it turned out to be less than a millisecond away from our server in Germany.
The link in the last paragraph provides some data to back up the claim. https://mikelovesrobots.substack.com/p/wheres-the-shovelware... - If the goal is to increase the rate of software production, there isn't much evidence that AI has moved the needle.
Sure, code gen is faster now. And the industry might finally be waking up to the fact that writing code is a small part of producing software. Getting infinitely faster at one step doesn't speed up the overall process. In fact, there's good evidence that rapid code gen actually slows down other steps in the process, like code review and QA.
You should consider dropping that instinct. If you look into how corporations have behaved historically, you'd assume evil until proven innocent. Especially US corps.
Thank you! Opening an incident as soon as user impact begins is one of those instincts you develop after handling major incidents for years as an SRE at Google, and now at Anthropic.
I was also fortunate to be using Claude at that exact moment (for personal reasons), which meant I could immediately see the severity of the outage.
American tech companies have been pushing the boundary on privacy ever since Google, then Facebook. They've gradually normalised the idea that privacy does not exist, all for their own capital gain.
There are European alternatives but they need support.
IMHO it requires conscious choices by European citizens to choose more carefully which online services they dedicate their time and money to. Or expect unintended consequences.