Top Comments | Hacker News



Dear Cliff,

I'm terribly sorry to hear of your passing, but am pleased that you have since gotten better.

Cheers!


Thank you for the update, Cliff. I will update your Wikipedia page to show that your death is currently under dispute.


> The packages for departing employees will include the equivalent of their full base pay through the end of 2026. Healthcare coverage is different across the globe, and if you’re in the United States, we’ll continue to provide support through the end of the year. We are also vesting equity for departing team members through August 15th, so they receive stock beyond their departure date. And, if departing team members haven’t hit their one-year cliffs, we are going to waive those and vest their pro-rated equity through August as well.

The announcement reads as pretty heartless to me, but this is a very, very nice departure package.


Welp, looks like I’m affected. If anyone is looking to hire a systems engineer with distributed systems and load balancing experience, shoot me an email at <anything>@piperswe.me :/

I’ll update this with a resume link tonight…


Perspective from the trenches: I teach at a university that uses Canvas. We are in our final exams period right now.

We got our first email (from Academic Affairs) notifying us that it was down at 5:17pm EDT this afternoon, with little info; follow-up emails were sent at 6:24 and 6:57 with more info, but mostly about how we would be compensating for it rather than what was actually going on (other than "nationwide shutdown" and "cybersecurity attacks", no further detail). I don't get the sense that they know much more than that, not that I would expect them to.

A perhaps telling detail: they're instructing us to have students email us directly with any work that had been submitted via Canvas. That suggests that they have no particular confidence that it will come back up soon.

I personally am only slightly affected; as a CS professor a lot of my students' work is done on department machines, and submitted that way, and I do the actual exams on paper. More importantly, I've never liked or trusted Canvas's gradebook, and so although I do upload grades to Canvas so students can see them, my primary gradebook is always a spreadsheet I maintain locally.

But I have a lot of colleagues for whom this is catastrophic, at the level of "the whole building burnt down with all my exams and gradebooks in it". Even many of those who teach 100% in person have shifted much or all of their assessment into Canvas (using the Canvas "quiz" feature for everything up to and including final exams) and use the Canvas gradebook as their source-of-truth record. We've been encouraged to do so by our administration ("it makes submitting grades easier"). Faculty in that situation have few or zero artifacts that the students produced, the students themselves can't resubmit work via email because it was done in Canvas in the first place, and there's no record of student grades or even attendance (because that was all managed inside Canvas). I guess they have access to the advisory midterm grades from March, if they submitted them (most do, some don't), but that might be it.

My gut feeling on this is that this is either resolved in hours (they have airgapped backups and can be working as soon as they can spin up new servers), or weeks (they don't). Very little in-between. And if that's true and we wake up tomorrow with this unresolved, I really have no idea what a lot of professors at my university and across the country are going to do to submit grades that are fair and reasonable. In the extreme case, they may have to revert to something we did in the pandemic semester (and before that, at my school, in the semester that two major academic buildings actually did burn to the ground a week before finals): let classes that normally count for a grade just submit grades as pass-fail. Because what else can you do?

(Well, one thing you can do is not put your eggs all in one basket, and not trust "the cloud" quite so much, but that ship's already sailed. I do wonder if in the longer term, anybody learns any lessons from this....)

UPDATE: As of 11:45pm EDT, my university's Canvas instance is up and running! Here's hoping it stays up (but I'll be downloading some stuff just in case...)


My understanding is that this new reCAPTCHA is basically just remote attestation.

Remote attestation doesn't use blind signatures (as that would be 'farmable'), so tying the device to the 'attestee' is technically possible with the collusion of Google's servers: EK (static burned-in private key) -> AIK (ephemeral identity key in the secure enclave, signed by a Google server) -> attestation (signed by the AIK). As you can see, if the Google server logs EK -> AIK conversions, an attestation can be trivially traced to your device's EK. This is also why we don't really see, and probably never will see, online services offering fake remote attestations: it would be pretty obvious that the next step of running such a service is Google becoming a customer and having all your devices blacklisted. Private farms probably won't last long either, as I'm sure Google logs everything and will correlate.

Unless something special is done with this new reCAPTCHA, not only are you locking internet services behind TPM chips, you are also surrendering anonymity to Google. Unless you acquire untraceable burners for every service, the new reCAPTCHA will be technically capable of tying all your accounts across all these services together. Much like age verification. It may appear that the service would need to cooperate to link the reCAPTCHA session to your registration, but the registration time alone will likely be sufficient (the anonymity set will be all but destroyed).
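
To make the chain concrete, here is a toy sketch in Python. Ed25519 (via the cryptography library) stands in for the real TPM/enclave primitives, and all the names and the log are my illustration, not Google's actual protocol:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    ek_aik_log = {}  # server-side log: AIK public key -> EK public key

    def raw(pub):
        return pub.public_bytes(Encoding.Raw, PublicFormat.Raw)

    def issue_aik(ek_priv):
        """Google-side: issue a fresh AIK for a device, keyed by its EK."""
        aik_priv = Ed25519PrivateKey.generate()
        # If the server records this mapping, every attestation made with
        # the AIK is traceable back to the device's burned-in EK.
        ek_aik_log[raw(aik_priv.public_key())] = raw(ek_priv.public_key())
        return aik_priv

    def attest(aik_priv, challenge: bytes) -> bytes:
        """Device-side: sign a service's challenge with the ephemeral AIK."""
        return aik_priv.sign(challenge)

    # One device, two "anonymous" sessions at two different services:
    ek = Ed25519PrivateKey.generate()   # static key burned into the hardware
    aik = issue_aik(ek)
    sig_a = attest(aik, b"challenge-from-service-A")
    sig_b = attest(aik, b"challenge-from-service-B")
    # Both attestations verify against the same AIK, and the log links that
    # AIK back to the EK, so with server collusion both sessions collapse
    # into one hardware identity.
    aik.public_key().verify(sig_a, b"challenge-from-service-A")  # no exception
    print(len(ek_aik_log))  # 1 entry: the deanonymizing EK <-> AIK link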


This is surprisingly common.

The security of UUIDv4 is based on the assumption of a high-quality entropy source. This assumption is invalidated by hardware defects, normal software bugs, and developers not understanding what "high-quality entropy" actually means and that it is required for UUIDv4 to work as advertised.

It is relatively expensive to detect when an entropy source is broken, so almost no one ever does. They find out when a collision happens, like you just did.

UUIDv4 is explicitly forbidden for a lot of high-assurance and high-reliability software systems for this reason.
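
A toy demonstration of how fast UUIDv4 degrades under a weak entropy source; the seed space and device count here are invented for illustration:

    # Simulate UUIDv4 generation on a fleet whose "entropy" is something
    # low-entropy like boot time in seconds, so the whole fleet draws from
    # only a few thousand distinct PRNG streams.
    import random, uuid

    def weak_uuid4(seed: int) -> uuid.UUID:
        # 128 bits from a PRNG whose entire entropy is one small seed.
        bits = random.Random(seed).getrandbits(128)
        return uuid.UUID(int=bits, version=4)  # stamps version/variant bits

    seen = {}
    collisions = 0
    for device in range(100_000):
        seed = device % 4096          # only 4096 possible entropy states
        u = weak_uuid4(seed)
        if u in seen:
            collisions += 1
        seen[u] = device

    print(f"collisions: {collisions}")  # ~96,000 of 100,000 IDs collide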


This really sucks. I loved this job. I'm an EM, and I was trying to hire more people because we're so busy with everything we needed to do. My team's products are something like 95% profit.

Really going to miss my team, they were wonderful to work with. Secretly hoping they'll have to rehire.

I refuse to believe it was about AI. From the inside, the bottleneck was never code. Looking at who is being laid off, especially on my team, it's the people who make things run.


I have largely written Reddit off and no longer visit it, after an experiment where I had an agent karma-farm for me and do some covert advertising. As I went through the posts it wrote, I realized that as a reader I would have had NO idea they were written by a computer. Many, many people (or other bots) had full-on conversations with it, and it scared me a bit.

I am not quite there with Hacker News but I do know for a fact that many "users" here are LLMs.

Online communities are definitely dying. I guess I hope that maybe IRL communities see a resurgence in the wake of all this.


I’ve seen managers hire people with the intent of laying them off when the winds change, to protect themselves and their close circle. I can only imagine they’ve had great KPIs in both cases: first for scaling the team, and then for cutting costs.


1000% agree. I am increasingly hesitant to believe Anthropic's continual war drum of "build for the capabilities of future models, they'll get better".

We've got a QA agent that needs to run through, say, 200 markdown files of requirements in a browser session. It's a cool system that has really helped improve our team's efficiency. For the longest time we tried everything to get a prompt like the following working: "Look in this directory at the requirements files. For each requirement file, create a todo list item to determine if the application meets the requirements outlined in that file". In other words: letting the model manage the high-level control flow.

This started breaking down after ~30 files. Sometimes it would miss a file. Sometimes it would triple-test a bundle of files and take 10 minutes instead of 3. An error in one file would convince it that it needed to re-test four previous files, for no reason. It was very frustrating. We quickly discovered during testing that there was no consistency to its (Opus 4.6 and GPT 5.4, IIRC) ability to actually orchestrate the workflow. Sometimes it would work, sometimes it wouldn't. I've also tested it once or twice against Opus 4.7 and GPT 5.5, though not as extensively; they seem to have the same problems.

We ended up creating a super basic deterministic harness around the model. For each test case, trigger the model to test that test case, store results in an array, write results to a file. This has made the system a billion times more reliable. But it's also made the agent impossible to run on any managed agent platform (Cursor Cloud Agents, Anthropic, etc.) because they're all so gigapilled on "the agent has to run everything" that they can't see how valuable these systems can be if you just add a wee bit of determinism in the right place.
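
Conceptually the harness is about this simple (a rough sketch, not our actual code; run_agent_on_requirement stands in for the model-plus-browser call):

    # The *loop* is plain deterministic code; the model is invoked once per
    # test case. run_agent_on_requirement is a stand-in for whatever
    # model + browser-session call you actually use.
    import json
    from pathlib import Path

    def run_agent_on_requirement(req_text: str) -> dict:
        """Hypothetical: ask the model to verify one requirement in a browser."""
        raise NotImplementedError

    def run_suite(req_dir: str, out_file: str) -> None:
        results = []
        # Deterministic control flow: sorted file list, one call per file.
        for path in sorted(Path(req_dir).glob("*.md")):
            try:
                verdict = run_agent_on_requirement(path.read_text())
            except Exception as exc:
                # An error in one file stays contained to that file.
                verdict = {"status": "error", "detail": str(exc)}
            results.append({"file": path.name, **verdict})
            # Write after every case so a crash never loses finished work.
            Path(out_file).write_text(json.dumps(results, indent=2))

    run_suite("requirements/", "results.json")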


This was always a nightmare waiting to happen. The sheer mass of packages, and the consequently vast supply-chain attack surface, was always a problem that was eventually going to blow up in everyone's face.

But it was too convenient. Anyone warning about it or trying to limit the damage was shouted down by people who had no experience of any other way of doing things. "import antigravity" is just too easy to do without.

Well, now we're reaching the "find out" part of the process I guess.


I run a niche creative community, and we outlawed AI-generated content in 2022 as it was easy to see how corrosive it would be to the community.

It hasn't been easy. We ban fake AI accounts daily and shrug off around 600 AI content creator accounts monthly.

It's a lot of work, extra work that wasn't needed before AI content came around, and of course that's an extra cost.

I fear losing the battle.


I work at Mozilla; I fixed a bunch of these bugs.

In general, I would say that our use of "vulnerability" lines up with what jerrythegerbil calls "potential vulnerability". (In cases with a POC, we would likely use the word "exploit".) Our goal is to keep Firefox secure. Once it's clear that a particular bug might be exploitable, it's usually not worth a lot of engineering effort to investigate further; we just fix it. We spend a little while eyeballing things for the purpose of sorting into sec-high, sec-moderate, etc., and to help triage incoming bugs, but if there's any real question, we assume the worst and move on.

So were all 271 bugs exploitable? Absolutely not. But they were all security bugs according to the normal standards that we've been applying for years.

(Partial exception: there were some bugs that might normally have been opened up, but were kept hidden because Mythos wasn't public information yet. But those bugs would have been marked sec-other, and not included in the count.)

So if you think we're guilty of inflating the number of "real" vulnerabilities found by Mythos, bear in mind that we've also been consistently inflating the baseline. The spike in the Firefox Security Fixes by Month graph is very, very real: https://hacks.mozilla.org/2026/05/behind-the-scenes-hardenin...



This has been a very long time coming, and the crackup we're starting to see was predicted long before anyone knew what an LLM was.

The catalyst is the shift towards software transparency: both the radically increased adoption of open source and source-available software, and the radically improved capabilities of reversing and decompilation tools. It has been over a decade since any ordinary off-the-shelf closed-source software was meaningfully obscured from serious adversaries.

This has been playing out in slow motion ever since BinDiff: you can't patch software without disclosing vulnerabilities. We've been operating in a state of denial about this, because there was some domain expertise involved in becoming a practitioner for whom patches were transparently vulnerability disclosures. But AIs have vaporized the pretense.

It is now the case that any time something gets merged into mainline Linux, several different organizations are feeding the diffs through LLM prompts aggressively evaluating whether they fix a vulnerability and generating exploit guidance. That will be the case for most major open source projects (nginx, OpenSSL, Postgres, &c) sooner rather than later.
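
The pipeline this describes is not exotic. A sketch of its shape in Python; classify_diff stands in for the actual LLM prompt, and the repo path is a placeholder:

    # Walk recent mainline commits and ask a model whether each diff looks
    # like a silent security fix. classify_diff is a stand-in for whatever
    # LLM prompt/API an organization actually runs.
    import subprocess

    def recent_commits(repo: str, n: int = 50) -> list[str]:
        out = subprocess.run(
            ["git", "-C", repo, "log", f"-{n}", "--format=%H"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.split()

    def commit_diff(repo: str, sha: str) -> str:
        out = subprocess.run(
            ["git", "-C", repo, "show", sha],
            capture_output=True, text=True, check=True,
        )
        return out.stdout

    def classify_diff(diff: str) -> bool:
        """Hypothetical LLM call: 'does this patch fix a vulnerability?'"""
        raise NotImplementedError

    for sha in recent_commits("/path/to/linux"):
        if classify_diff(commit_diff("/path/to/linux", sha)):
            print(f"{sha}: candidate silent security fix")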

The norms of coordinated disclosure are not calibrated for this environment. They really haven't been for the last decade.

I'm weirdly comfortable with this, because I think coordinated disclosure norms have always been blinkered, based on the unquestioned premise that delaying disclosure for the operational convenience of system administrators is a good thing. There are reasons to question that premise! The delay also keeps information out of the hands of system operators who have options other than applying patches.


Feudal Japan had a measurement called the "koku", which is roughly the amount of rice needed to feed a person for a year: about 330 lb. You can now buy 50 lb. of rice at Costco for $30, which is a few hours of work at minimum wage.

To me, that is a modern marvel. I don't want people to buy things that they don't need, and I also don't like the crowds, but I can't help but feel grateful for a stocked grocery store that is accessible to basically everyone—isn't that the dream?

https://en.wikipedia.org/wiki/Koku
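
The arithmetic, for anyone who wants it (a quick sketch; the $7.25/hour US federal minimum wage is my assumption):

    # Rough arithmetic on the koku comparison, using the prices quoted above.
    koku_lb = 330             # one koku of rice, in pounds
    bag_lb, bag_usd = 50, 30  # the Costco bag
    min_wage = 7.25           # USD/hour, US federal minimum (assumed)

    koku_usd = koku_lb / bag_lb * bag_usd  # 6.6 bags -> about $198 per year
    hours = koku_usd / min_wage            # about 27 hours of work
    print(f"one koku is about ${koku_usd:.0f}, or {hours:.0f} hours at minimum wage")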


Funny story no one will believe, but it’s true. A good friend of mine joined a startup as CTO 10 years ago, high-growth phase, maybe 200 devs… In his first week he discovered the company had a microservice for generating new UUIDs. One endpoint, with its own dedicated team of 3 engineers… including a database guy (the plot thickens). Other teams were instructed to call this service every time they needed a new ‘safe’ UUID. My pal asked wtf. It turned out this service had its own DB to store every previously issued UUID. Requests were handled as follows: generate a UUID, then ‘validate’ it by checking the database to ensure it didn’t match any previously issued UUID, then insert it, then return it to the client. Peace of mind, I guess. The team had its own kanban board and sprints.
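
The request path, as described, boils down to something like this (my reconstruction for flavor, not their actual code; SQLite stands in for the dedicated database):

    # Reconstruction of the service's request path as told above.
    import sqlite3, uuid

    db = sqlite3.connect("issued_uuids.db")
    db.execute("CREATE TABLE IF NOT EXISTS issued (id TEXT PRIMARY KEY)")

    def get_safe_uuid() -> str:
        while True:
            candidate = str(uuid.uuid4())
            # 'Validate' against every UUID ever issued...
            row = db.execute(
                "SELECT 1 FROM issued WHERE id = ?", (candidate,)
            ).fetchone()
            if row is None:
                db.execute("INSERT INTO issued VALUES (?)", (candidate,))
                db.commit()
                return candidate
            # With honest entropy this branch is essentially never taken:
            # a collision among billions of v4 UUIDs has odds around 1e-18.

    print(get_safe_uuid())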


I dislike the title because it doesn't clearly state it's a layoff. "Building for the future" gave me the impression that it's about some major new initiative with a roadmap outlining plans.


I live in Poland. This headline is misleading. Poland didn't build a top-20 economy. Western Europe and the US built their economy in Poland, because the labor is educated and cheap.

There are almost no globally competitive Polish companies. The "growth" is branch offices of German and American corporations taking advantage of engineers who'll work for 40% of Berlin rates. Remove the foreign-owned sector and you're looking at a mid-tier economy running on EU structural funds.

It's a great place to live, genuinely. But calling this "Poland's economy" is like calling a McDonald's franchise "your restaurant".


* I'm not in that city.

* It's running a kind of Chrome on a kind of Linux, at a stretch.

* Nobody can infer when I work and when I sleep. That includes me.

* The recent, high-end display is the screen of a low-end tablet I bought in a supermarket five years ago.

* But yes, browser fingerprinting is annoying.

* Since you can detect light mode, would it kill you to honor it?


In case people no longer remember: when China started to require websites to register for a license before being allowed to operate, it was for "protecting the children" too.

This simple policy went on to silence most individual publishers (self-media) and consolidated the industry into the hands of a few, with no opportunity left for smaller entrepreneurs. That is arguably much worse than allowing children to watch porn online, because it will for sure affect people's whole lives in a negative way.

Also, if the EU really wants "VPN services to be restricted to adults only", they should just fine the children who use them, or their parents for allowing it to happen. The same way you fine drivers for traffic violations, but not the road.

And if the EU still thinks that's not enough, maybe they should just cut the cable, like North Korea did.


The only effective punishment/threat that I saw work on my bullies at school was the threat to remove one of them from the football team and prevent him from playing for the school. He turned it around and was ok after that.

It was highly effective because it was a bigger punishment than those used for not doing your homework, and because it was highly relevant to him specifically. It worked because we had 16 students to a class (I was very privileged to be there) and teachers who gave a crap and put the time in to understand the problem and think of potential solutions, rather than just apply generic policy.

The problem is that most schools don't do that, would likely argue they don't have time to do that, and also probably spend a fair amount of resources and time on relatively ineffective bullying prevention.


I love the Polish, but credit where credit is due:

"Poland is the largest beneficiary of EU funds 2014-2020, with one in four euros going to Poland"

https://www.gov.pl/web/funds-regional-policy/poland-at-the-f...

Update: The comments below this are strange.

I meant: "Poland gets money, Poland transforms it into more money."

Is Poland more efficient at it than other countries? I do not know. Would Poland have generated less money without it? Probably? Is an annual investment of 2-3% of GDP into a country a lot? I think so?


"We are our own most demanding customer. Cloudflare’s usage of AI has increased by more than 600% in the last three months alone. Employees across the company from engineering to HR to finance to marketing run thousands of AI agent sessions each day to get their work done. That means we have to be intentional in how we architect our company for the agentic AI era in order to supercharge the value we deliver to our customers and to honor our mission to help build a better Internet for everyone, everywhere."

As an English enthusiast, I'm getting very frustrated at how the language is consistently abused in executive communications to write words without saying anything.

The implication, never stated outright, is that suddenly 20% of people were sitting around with no work to do because AI was making everyone so efficient and productive. That does not, however, seem to be the reality, based on conversations within the company. It appears we have yet another case of an economic downturn disguised as increasing velocity.


I am a physics professor and often use Gemini to check my papers. It is a formidable tool: it found a clerical error (a missing imaginary unit in a complex mathematical expression) that I had been unable to find for days, and it often highlights connections between concepts and ideas that I had overlooked.

However, it often makes conceptual errors that I can spot only because I have good knowledge of the topic I am discussing. For instance, in 3D Clifford algebras it repeatedly confuses exponentials of bivectors with exponentials of pseudoscalars.
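
For the record, the distinction it keeps missing (a minimal LaTeX sketch; in Cl(3,0) a unit bivector B and the pseudoscalar I = e1 e2 e3 both square to -1, which is exactly why the two exponentials look deceptively alike):

    % Both exponentials have the same closed form:
    e^{\theta B} = \cos\theta + B\sin\theta ,
    \qquad
    e^{\theta I} = \cos\theta + I\sin\theta
    % But B anticommutes with vectors in its plane, so R = e^{-\theta B/2}
    % is a rotor: v \mapsto R v \tilde{R} rotates v by \theta in that plane.
    % I is central in Cl(3,0), so it commutes straight through the sandwich:
    e^{-\theta I/2} \, v \, e^{\theta I/2} = v
    % i.e. no rotation at all; e^{\theta I} acts like a complex phase (duality).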

Good to know that ChatGPT 5.5 Pro can produce a publishable paper, but from what I have seen so far with Gemini, it seems to me that it is better to consider LLMs as very efficient students who can read papers and books in no time but still need a lot of mentoring.


We will know when aliens are here when a new Polymarket account bets $10M on "aliens about to be discovered".


Whether it's AMP, Manifest V3, the Android source shenanigans, the attempts to replace cookies with their FLoC nonsense, or this... Google is rapidly turning into a malicious force when it comes to the open internet.