Next.js version 15.2.3 has been released to address a security vulnerability
nextjs.org
243 points by makepanic a month ago
Tbh the entire middleware system in Next is awful and everyone would be better off if it was scrapped and reimplemented from scratch.
For starters, there's no official way to chain multiple middlewares. If you want to do multiple things, you either stuff it all into a single function or you have to implement the chaining logic yourself. Worse, the main functions (next, redirect, rewrite, ...) are static members on an imported object. This means that if you use third party middlewares, they will just automatically do the wrong thing and break your chaining functionality.
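Roughly, the hand-rolled chaining everyone ends up writing looks something like this (a sketch assuming the standard next/server types; withAuth and withLocale are made-up examples, not real packages):

  // middleware.ts
  import { NextResponse, type NextRequest } from 'next/server'

  type Handler = (req: NextRequest) => NextResponse | undefined

  // Each link either short-circuits with a response or passes through (returns undefined).
  const withAuth: Handler = (req) =>
    req.cookies.get('session')
      ? undefined
      : NextResponse.redirect(new URL('/login', req.url))

  const withLocale: Handler = (req) =>
    req.nextUrl.pathname.startsWith('/en')
      ? undefined
      : NextResponse.rewrite(new URL('/en' + req.nextUrl.pathname, req.url))

  const chain = (handlers: Handler[]) => (req: NextRequest) => {
    for (const handler of handlers) {
      const res = handler(req)
      if (res) return res      // first handler that responds wins
    }
    return NextResponse.next() // otherwise continue to the route
  }

  export const middleware = chain([withAuth, withLocale])

Which mostly works, right up until a third-party middleware returns NextResponse.next() or redirect() on its own and your chain never gets a say.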
Then, there's no good way to communicate between the middleware and the route handlers. Funnily enough the only working one was to stuff data through headers and then retrieve it through headers(). If someone knows your internal header names this could be very unsafe.
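For reference, the pattern being described looks roughly like this (a sketch, not an endorsement; x-user-id is a made-up header name):

  // middleware.ts: stuff data into a request header...
  import { NextResponse, type NextRequest } from 'next/server'

  export function middleware(request: NextRequest) {
    const requestHeaders = new Headers(request.headers)
    requestHeaders.set('x-user-id', '42') // made-up internal header name
    return NextResponse.next({ request: { headers: requestHeaders } })
  }

  // app/api/me/route.ts: ...and read it back in the route handler
  import { headers } from 'next/headers'

  export async function GET() {
    const userId = (await headers()).get('x-user-id') // headers() is async as of Next 15
    return Response.json({ userId })
  }

If a request reaches the handler without going through the middleware (which is exactly what this CVE allows), that header is whatever the client chose to send.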
One additional issue with that is that headers() turns your route handler into a dynamic one. This opts you out of automatic caching. I think they recently gave up on this entirely, but this was the second biggest feature of Next 14 and you lost it because you needed data from the middleware ...
And lastly it still hides information from you. For whatever reason request.hostname is always localhost. Along with some other properties that you might need being obfuscated. If you really wanted to get the actual hostname you needed to grab it out of the "Host" header.
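i.e. something like this (sketch):

  import { NextResponse, type NextRequest } from 'next/server'

  export function middleware(request: NextRequest) {
    // request.nextUrl.hostname can report "localhost" when self-hosting,
    // so fall back to the header the client actually sent.
    const host = request.headers.get('host') ?? request.nextUrl.hostname
    console.log('actual host:', host)
    return NextResponse.next()
  }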
I'm not really surprised that the header/middleware system is insecure.
This is my biggest complaint about nextjs. The middleware implementation is horrific.
No way to communicate information from middleware to requests means people encode JSON objects into text and add it as a header to be accessed from requests using headers(). They put session/auth info in there.
I would never recommend the framework to anyone on this basis alone.
They actually appear to be working on "Interceptors" which is what you're describing; https://github.com/vercel/next.js/pull/70961
Wait, are you seriously saying the mechanism that has existed in Express for years is broken in Next.js?
What a joke.
I would argue if you're trying to chain middleware or communicate between middleware, you're already holding it wrong. In basically every other framework you would have similar issues, in that there is no good, safe way to achieve what you're describing except persisting some data on an object and then hoping for the best. It's brittle, not type safe and just generally poor design.
With that said, I do agree that nextjs middleware is trash. My main issue with it is that I never use nextjs on vercel, always on node, but I'm still limited in what I can use in middleware because they're supposed to be edge-safe. Eye roll. They are apparently remedying this, but this sort of thing is typical for next.
I'm not sure I agree. Having multiple middlewares is a standard feature in many libraries/frameworks. Express (Node) has many middleware libraries to do a ton of stuff. Axum (in Rust) also makes use of middlewares. You can argue that there are better ways to do some of the things middlewares are used for, but then you're also arguing that literally everyone is holding it wrong.
I also don't think every other framework has the exact same issues. Take a look at SvelteKit for example.
You can add data from the middleware/hook into a locals object (https://svelte.dev/docs/kit/hooks#Server-hooks-locals). This is request scoped and accessible from the route handlers when needed. It also supports type definitions (https://svelte.dev/docs/kit/types#Locals). I wouldn't call this brittle. It's just dependency injection.
Note that it doesn't explicitly support multiple middlewares either (well, sort of; there's https://svelte.dev/docs/kit/faq#How-do-I-use-middleware but I think you're meant to be using hooks for your code https://svelte.dev/docs/kit/hooks#Server-hooks-handle), but at least it's easy to use and doesn't intentionally try to obfuscate information from you.
Edit: It seems that at some point sequence (https://svelte.dev/docs/kit/@sveltejs-kit-hooks#sequence) got added, so disregard the paragraph above.
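For reference, the whole thing in SvelteKit terms looks roughly like this (a sketch; it assumes app.d.ts declares a user field on App.Locals):

  // src/hooks.server.ts
  import type { Handle } from '@sveltejs/kit'
  import { sequence } from '@sveltejs/kit/hooks'

  const auth: Handle = async ({ event, resolve }) => {
    // request-scoped and typed via App.Locals, no headers involved
    event.locals.user = event.cookies.get('session') ? { name: 'demo' } : null
    return resolve(event)
  }

  const logging: Handle = async ({ event, resolve }) => {
    console.log(event.request.method, event.url.pathname)
    return resolve(event)
  }

  export const handle = sequence(auth, logging)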
> I would argue if you're trying to chain middleware or communicate between middleware, you're already holding it wrong.
I haven't kept up with Next.js idioms but in general that's what middleware is for. It's implied in the name. Middleware chaining is a common idiom.
It's the little detail that Next.js middlewares intercommunicate over HTTP headers (?!) that makes it a different pattern.
> It's brittle, not type safe and just generally poor design.
Dotnet has no problem with that when using Minimal APIs.
Watch out, Lee is gonna show up and defend this decision and not respond to any valid criticisms.
I can't think of too many popular frameworks that DON'T support multiple middleware. Yes, you persist data through all the middleware if needed; that's the whole point
Javascript was never built for those use-cases. It should have stayed in the browser.
JavaScript has handled the concept of middleware for decades.
For a decade, since the intro of NodeJS
Sorry to inform you but you're off by about 6 years, nodejs was released about 16 years ago, and express has had middleware for 14 of those years.
I found a different article that goes into more detail:
https://zeropath.com/blog/nextjs-middleware-cve-2025-29927-a...
This looks trivially easy to bypass.
More generally, the entire concept of using middleware which communicates using the same mechanism that is also used for untrusted user input seems pretty wild to me. It divorces the place you need to write code for user request validation (as soon as the user request arrives) from the middleware itself.
Allowing ANY headers from the user except a whitelisted subset also seems like an accident waiting to happen. I think the mindset of ignoring unknown/invalid parts of a request as long as some of it is valid also plays a role.
The framework providing crutches for bad server design is also a consequence of this mindset - are there any concrete use cases where the flow for processing a request should not be a DAG? Allowing recursive requests across authentication boundaries seems like a problem waiting to happen as well.
> More generally, the entire concept of using middleware which communicates using the same mechanism that is also used for untrusted user input seems pretty wild to me.
That's basically the same way phone phreaking worked back in the day. Time is a flat circle.
Somehow people never learn to avoid in-band signalling.
LLMs have the same problem a la "ignore previous requests".
The fundamental problem is that you always either need two signalling paths or you have to specially encode all user content so that it can never conflict with the signalling.
Those are both a pain in the ass, so people always try to figure out how to make in band signalling work.
There are mechanisms for this, like signed headers or extra auth tokens, but using those here should immediately illustrate the absurdity of a framework using headers internally to pass information to other parts of the framework.
Relevant parallel to this is the x-forwarded-for header and (mis)trusting it for authz.
This seems like a consequence of Vercel pushing that weird "middleware runs on edge functions" thing on NextJS, and b/c they are sandboxed they have no access to in-memory request state so the only way they can communicate w/ the rest of the framework is via in-band mechanisms like headers.
Is that a fair characterization?
(the fix was to add a random string as another header then checking to make sure it's still there afterwards, effectively an auth token: https://github.com/vercel/next.js/pull/77201/files )
Unfortunately, in-band signalling seems to be the norm when dealing with HTTP. There isn't really a standard mechanism for wrapping up an HTTP request in a standard format and delivering it, plus some trusted metadata, over HTTP to another service.
Or if there is, and I've somehow missed it, please *please* share it with me.
Just use MIME multipart content-type to wrap an HTTP message inside another. This is commonly done for batching requests. Here is an example of what it might look like: https://cloud.google.com/storage/docs/batch#http
Same problem as using headers. That too is in-band, because the client can also create multipart requests.
That misses the point. The OP's original use case is for a middleware to wrap a client request. The middleware would reject such multipart requests from the client.
The same way that it can reject certain headers, like it could have done in this case. It's no different, still in-band.
The middleware doesn't have to reject it. It could decide to just wrap it and pass it along. The backend code can then distinguish which parts were sent by the client and which were added by the middleware. And that's the point. The middleware can do as little or as much filtering as it desires, without causing any confusion to the backend.
The absurd part to me is that this is all internal to the framework, why on earth does NextJS need to wrap up an HTTP request and re-send it...to itself...?
(I think the answer is because of the "requirement" that middleware be run out-of-process as Vercel edge functions.)
This is something like it, though no finished standard exists: https://en.wikipedia.org/wiki/HAR_%28file_format%29
(An abandoned spec is at https://w3c.github.io/web-performance/specs/HAR/Overview.htm...)
That "article" looks like AI generated slop. It suggests `if (request.headers.has('x-middleware-subrequest'))` in your middleware as a fix for the problem, while the whole vulnerability is that your middleware won't be executed when that header is present.
You’re right - I was specifically referring to it giving a concrete example (which may or may not be correct) of the vulnerability as opposed to the main article just pointing in the direction of the header.
The post from the reporters is much more useful for this: https://zhero-web-sec.github.io/research-and-things/nextjs-a...
> Allowing ANY headers from the user except a whitelisted subset also seems like an accident waiting to happen.
I'm going to disagree on this. Browsers and ISPs have a long history of adding random headers, a website can't possibly function while throwing an error for any unknown header. That's just the way HTTP works.
This is clearly a case of the Next devs being silly. At a minimum they should have gone with something like `-vercel-` as the prefix instead of the standard `x-` so that firewalls could easily filter out the requests with a wildcard.
Even if they had to make things go through headers (a bad idea in and of itself, in-band signalling always causes issues), the smart move would have been to make it a non-string, such that clients would not be able to pass in a valid value.
Well, there’s 2 possibilities:
1) Plain HTTP, go wild with headers. No system should have any authenticated services on this.
2) HTTP with integrity provided by a transport layer (so HTTPS, but also HTTP over Wireguard etc for example). All headers are untrusted input, accept only a whitelisted subset.
With this framing, I don’t think it’s unreasonable for a given service to make the determination of which behaviour to allow.
I guess browser headers are still a problem. But you can get most of the way by dropping them at the request boundary before forwarding the request.
next.js has a history of similar vulnerabilities.
I was made aware recently of a vulnerability that was fixed by this patch: https://github.com/vercel/next.js/pull/73482/files
In this vulnerability, adding a 'x-middleware-rewrite: https://www.example.com' header would cause the server to respond with the contents of example.com. i.e. the world's dumbest SSRF.
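To illustrate what that means in practice (a sketch against an unpatched version, based purely on the description above; the target URL is hypothetical):

  const res = await fetch('https://victim.example/any-page', {
    headers: { 'x-middleware-rewrite': 'https://www.example.com' },
  })
  // on affected versions the body reportedly came back from example.com,
  // i.e. the server fetched an attacker-chosen URL on the requester's behalf
  console.log(await res.text())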
Note that there is no CVE for this vulnerability, nor is there any clear information about which versions are affected.
Also note that according to the published support policy for nextjs only "stable" (15.2.x) and "canary" (15.3.x) receive patches. But for the vulnerability reported here they are releasing patches for 14.x and 13.x apparently?
https://github.com/vercel/next.js/blob/canary/contributing/r...
IMO you are playing with fire using nextjs for anything where you care about security and maintenance. Which seems insane for a project with 130k+ Github stars and supported by a major company like vercel.
Heh, that commit you linked added a bunch of headers to INTERNAL_HEADERS (to prevent external use) but they forgot to add the one in this particular vulnerability. This was done in December 2024. There were probably a myriad of vulnerabilities with these headers before that commit. Wild it wasn’t a CVE.
Look, we need to show some restraint here and some class. Vercel has only raised $538 million; it's not reasonable to be so critical of their security practices when weighed against the business value of their products.
Not to mention the same critical vulnerability in Clerk's Next.js SDK, which should've been a wake up call.
https://clerk.com/changelog/2024-02-02#:~:text=Our%20solutio...
'Next.js has published 16 security advisories since 2016' - https://nextjs.org/blog/cve-2025-29927
At first read that sounds very reasonable! But then you realize that not all vulnerabilities got a security advisory...
This is a wild vuln in how trivial it is to execute. But maybe even wilder is the timeframe to even _start_ triaging the bug after it was reported. How? Was it incorrectly named? Was the severity not correctly stated? Someone help me understand how this sits for 2+ weeks.
2025-02-27T06:03Z: Disclosure to Next.js team via GitHub private vulnerability reporting
2025-03-14T17:13Z: Next.js team started triaging the report
Yeah, "obvious" critical vulnerability that is easy to use against any Nextjs app, spend 2 weeks making a fix and then announce on Friday evening that all Nextjs apps are fair game. Lovely. Luckily doesn't affect any of the sites I'm responsible for, since I hated middleware and most of the Nextjs "magic" features already.
> spend 2 weeks making a fix
They didn't spend 2 weeks making a fix, that took a few hours. It took them two weeks to look at the report.
It took them a week to respond about the initial report for v12.0.0; the exploit was so trivial and obvious that even that should have been a warning to go check newer versions themselves, even if they hadn't seen the follow-up message that had been sent a few days prior showing that the vulnerability was present in later versions.
"Luckily doesn't affect any of the sites I'm responsible for, since I hated middleware and most of the Nextjs "magic" features already."
This is probably the most important comment. You don't have to use Next.js, and if you do have to, you don't have to use everything they have in it.
I don't think that's the takeaway.
What's the takeaway?
The takeaway is that most people don’t think this way. A large portion of the online recommendations for auth in Nextjs point to middleware for it. Knowing this, you’d expect a faster response time from the people who maintain the framework and stand to lose the most.
The Vercel-like auth company that Vercel's CEO invested in recommends middleware for protecting routes by default:
https://clerk.com/docs/references/nextjs/clerk-middleware
You wouldn't get a user's info, but you'd get free rein to explore every page of a product
Seems indicative of the company's priorities, especially as of late.
This has always been an issue with Vercel. I highly recommend people stay way from their stuff.
What's the next best alternative? Astro?
What do you get out of Next.js over vanilla React? I've never understood why that ecosystem is so popular.
Anyway though, Astro is lovely, especially for static site generation.
> What do you get out of Next.js over vanilla React?
The biggest problem is that React itself recommends against using Vanilla React.
https://react.dev/learn/creating-a-react-app
> If you want to build a new app or website with React, we recommend starting with a framework.
This, frankly, is insane. The whole point of React was that it was this relatively lightweight UI library you could drop into pretty much any workflow.
The fact that the React docs themselves recommend against using the React library as a library is just mind boggling but also another instance in the long history of React devs being absolutely hostile to their users.
There’s a great deal of value in the “fullstack meta-frameworks” model of things. For one, using the same language on the backend and frontend is an underrated feature.
But Next.js is not the only option on the market, so I partially echo your sentiment, not around React SPA vs React fullstack, but around Next.js vs a half dozen better alternatives for the React ecosystem.
> using the same language on the backend and frontend is underrated feature
I agree, but you can definitely do this without SSR or Next.JS. Common examples are tRPC, Zodios, or even just plain fetch calls with shared type definitions.
Even SSR is pretty easy to do without a framework. Just render the component with react-dom/server and use hydrate on the client.
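A minimal sketch of that, assuming React 18 APIs (the file split is just illustrative):

  // App.tsx: a shared component
  export const App = () => <h1>Hello from either side</h1>

  // server entry: render to markup, then embed it in your HTML shell
  import { renderToString } from 'react-dom/server'
  import { App } from './App'
  const markup = renderToString(<App />)

  // client entry: attach event handlers to the server-rendered markup
  import { hydrateRoot } from 'react-dom/client'
  import { App } from './App'
  hydrateRoot(document.getElementById('root')!, <App />)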
> using the same language on the backend and frontend is underrated feature.
You don't need a framework for that.
Not everyone wants to build a website from scratch. Most people hate build systems.
Building a React app "from scratch" with Vite is this complicated:
  import { defineConfig } from 'vite'
  import react from '@vitejs/plugin-react'

  export default defineConfig({
    plugins: [react()],
  })

Express?
Express is not a next.js alternative. It covers a small part of the server-side parts but none of the templating, client, etc.
Besides the standard parts IIRC next.js has stuff for image & font optimization and more.
I really dislike next.js, but saying express is an alternative for most next.js apps is not true.
If there is no evidence of in the wild exploitation and no reason to think the vulnerability is publicly known, then 2 weeks seems like an acceptable turn around time.
If you start looking at big corps, you will very quickly find instances of fairly severe vulns that sit for months before a fix is issued.
(I'm assuming "started triaging" actually means worked on a fix. If they didn't even respond to the reporter for 2 weeks, that is kind of bad)
> no evidence of in the wild exploitation
That's how zero day exploits work. People keep it quiet so they can keep exploiting it.
Sure, but it's also how vulns not currently being exploited work.
Good security is about risk management. For a vuln not thought to be exploited, an extra week or two is a reasonable cost/benefit to ensure a proper job was done fixing it and making sure nobody has to pull an all nighter.
If they sat on it for a year, that would be a different story.
It's impossible to know how many people knew about it before it was reported. It's also trivial to add a header to bypass middleware. Apparently it was there since v12 released in 2021 so god only knows how much damage this has caused already.
And let's not forget there are still many unpatched Next self hosted apps, right now.
I can't believe how anyone can downplay this in any way.
Looking at next I have to think that something went horribly wrong with front end development. It adds so much complexity for things that provide such minimal value to most apps.
React added a lot of complexity to the front end, but, for an app with a lot of front end state, brought a ton of value.
Next brings us file based routing, which seems cool until you get into any sort of mildly complex use case, and, if you're careful and don't fuck it up, server side rendering, which I guess is cool if you're building an e-commerce product, and is maybe cool for a few other verticals?
> React added a lot of complexity to the front end,
I keep hearing this but I disagree completely. Does no one remember Angular.js? Backbone? Ember.js? Even my favorite framework, Knockout, had lots of complexity.
SSR has been misused widely for years and we’re now starting to see the effects of that. But there ARE great use cases for SSR.
And frontend dev is the easiest it's ever been. Run Vite Create and you have a fully working React SPA that can be deployed in minutes on Render.com. No more messing with Webpack, or Bower, or Broccoli, or Gulp or Grunt or whatever madness came before. Frontend dev is in the best place it's been in years.
> I keep hearing this but I disagree completely. Does no one remember Angular.js? Backbone? Ember.js? Even my favorite framework, Knockout, had lots of complexity.
You're using a different frame of reference. Those people you're referring to, including gp, probably mean that frameworks add complexity to the frontend. That would include all the ones you listed.
Okay, so go back before that, to jQuery (which should win the Nobel Peace Prize) used with vanilla JS, building absolutely bonkers custom scripts all over the place.
React was a paradigm shift towards more complex frontend apps, but there was still plenty of complexity before it. It replaced a bunch of .erb or mustache or whatever templating that then tried to be interactive with JS layered on.
What React replaced was not less complex overall, though technically I guess it moved more of the functionality to the frontend.
I don't think people like GP are arguing that there is no place for these frameworks. The argument is that there are too many people just using these frameworks in projects where they may not be needed and blindly running "npm create react-app" or whatever. Then you add something like NextJS on top, which makes things even worse.
I would argue that the majority of NextJS projects don't need to be built in NextJS and could do with simpler front end JS.
I'll never go back to a pre-React rendering library. Newer ones, sure. jQuery was awesome in its day, but it made beautiful spaghetti.
Nope. Commenters here love to just state "X is over complicated!!!" when React is about the least complicated UI system across any medium there is.
Next has made React pretty complicated with RSC.
And Next itself is abandoning all reason, like implementing redirects that break if you catch exceptions (because they're exceptions) and server actions that silently fail if a network issue occurs.
You clearly haven’t used Svelte. React is the most convoluted pile of bad abstractions of all of the big frameworks.
Most bad tech decisions of Next.js are motivated by their business model, notably the middleware system to promote edge functions.
If you're looking for something simpler that's closer to Next's original premise, Remix.js is awesome and much lighter.
Hmm. I haven’t used Remix but I’ve avoided it for exactly the same reason, that I might become a victim of their latest business model.
They got their start way back with React-Router. At the time, their business was React Training. They’d train people how to use React. React Router had this curious tendency to change its API drastically with each release. Stuff you depended on would suddenly go away, and you’d be told “That’s not the right way to build apps anymore. This is the True Way.” It really sucked, but it seemed like a good way to drive demand for training.
Then they came up with Remix. Remix has been pretty stable, but when looking at React Router, I kept noticing there was stuff that felt more like an app framework than a router. It felt like it’s pulling me into Remix. Then last year they announced that they’re merging Remix and React Router. So if I was already dependent on React Router, I’d be fully committed to Remix, whether I wanted to be or not.
What new shiny thing or new business model will they be chasing next year? I’m not willing to risk finding out.
I've decided that if I ever had a need to write React apps I would stick to react-router 6.0.0 specifically.
I think they did good with v6 despite drastically changing it, but the v7+ smells like trouble.
Oh my word:
The exploit involves crafting HTTP requests containing the malicious header:
  GET /protected-route HTTP/1.1
  Host: vulnerable-app.com
  x-middleware-subrequest: true
So... just adding a "x-middleware-subrequest: true" header bypasses auth? Am I understanding this correctly?
> So... just adding a "x-middleware-subrequest: true" header bypasses auth? Am I understanding this correctly?
correct.
That is how serious this bypass is and why it is a severity 9.1 (I think it should be a 9.8, as it is so trivial by adding a single header).
“Bypasses auth” is a weird way to put it, although everyone seems to describe it in those terms. It bypasses middleware, which is bad (and embarrassing for Vercel), but middleware shouldn’t be responsible for access control. The middleware shouldn’t be doing much more than redirecting to the sign-in page if you don’t have a session.
Why shouldn’t middleware be responsible for access control?
That should be the server. Your Nextjs app should have zero access to business data without at least an auth token. And if you're relying on middleware for auth, it'll be responsible for providing that auth token to the rest of the app. And if you bypass middleware, then there's no auth token, and no vulnerability.
This is only a vulnerability if you have pages you don't want to render for some people, regardless of upstream data it would need to fetch.
Not necessarily. There is no big difference whether the business logic resides in the same node process or another one. If the first process is unsafe on that level, then the token can also be extracted.
Middleware runs server side doesn't it? tbh I haven't used nextjs middleware. But in many frameworks have used middleware that provides overarching access control.
For example having all routes under `/admin/*` automatically return an error if the user is not an admin, and then the individual routes don't need to be concerned with access control.
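In Next terms, that pattern would look roughly like this (a sketch; the role cookie is a made-up check, the matcher config is the documented way to scope middleware to a prefix):

  // middleware.ts
  import { NextResponse, type NextRequest } from 'next/server'

  export function middleware(request: NextRequest) {
    const isAdmin = request.cookies.get('role')?.value === 'admin' // made-up check
    if (!isAdmin) {
      return new NextResponse('Forbidden', { status: 403 })
    }
    return NextResponse.next()
  }

  // only runs for /admin/* routes
  export const config = { matcher: '/admin/:path*' }

...which is exactly the kind of blanket check this header bypass walks straight past.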
The issue is, everyone uses middleware because Next.js doesn't provide a primitive for this the way every other framework does: just something to execute before your endpoint, that's it.
They haven't had one for years and everyone wrapped their endpoints instead, which was error prone and also flat out annoying, so it's reasonable that people then jump to middleware.
Sorry I am new to Next, and I expect others are too. In Express, middleware runs on the server, and it's a common pattern to handle authentication checks in there before the request reaches any routers. Are you saying that the "middleware" described here is purely a client-side thing? If so, I agree, it's silly to put any kind of auth in there. But the language on the Next website made me think that this was server-side; the mention of the cookie validation (which should not happen on a client), and the mention of the deployment type. I was also under the impression that Next was a framework that spans the client and the server.
So to confirm: where does this middleware run?
> redirecting to the sign-in page if you don’t have a session
Is this not access control?
Yes. Yes it is. I guess this person has the same stance Vercel now has. Even the Next.js docs can't make up their mind about whether you should or should not do it. They recommended it until yesterday, but then another major security flaw was discovered that made it useless, and now they have removed authentication from the docs.
The takeaway is that you should not do it. You should never use Next.js if you ever have something that is not supposed to be public for everyone.
No serious company uses Next.js after all the recent major security issues, at least not if they have and respect users' data.
yeah i guess it depends on your app... if your whole paid tier relies on access to protected pages where the check happened in middleware then it's a big issue, but if you have additional checks there such as checking userid and subscription inside routes then it's not as big of a deal, as the user in theory won't be able to do anything.
BTW people are talking about whether middleware should be used for auth and, while I don't like this pattern, it is the adopted pattern for the app router in nextjs, and services like clerk and supabase use it heavily.
Yeah in practice this will let people see the structure of admin pages they wouldn’t normally get to see. But not any data.
There must be tens of thousands of websites that are vulnerable, right now
> Next.js uses an internal header x-middleware-subrequest to prevent recursive requests from triggering infinite loops. The security report showed it was possible to skip running Middleware, which could allow requests to skip critical checks—such as authorization cookie validation—before reaching routes.
Not a web dev, so struggling a bit to understand this.
Are they saying they had a special flag that allowed requests to bypass auth, intended to be used by calls generated internally?
And someone figured out you could just send that on the first request and skip auth entirely?
If I’m reading the code right, it supports their hybrid model where your code can run in three places: the user’s browser, Vercel’s edge, and an actual server. It looks like the idea was for code in the edge context to be able to call the server faster, but it was not protected to keep anyone else from calling it directly.
If I have that right, this is a security review failure, since people perennially try that optimization and have it end poorly for reasons like this. It’s safer, and almost always less work, to treat all calls equally and optimize if needed rather than having to support an “internal” call type over the same interface.
As I understand it, the middleware runs before a request hits a page or API route.. so to avoid infinite loops from internal subrequests (URL rewrites, etc), Next.js tags them with the x-middleware-subrequest header. This tells the runtime to skip middleware for those requests and proceed directly to the target. Unfortunately this also works externally.
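For self-hosters who can't upgrade immediately, the commonly recommended stopgap is to keep that header from ever reaching Next; a rough sketch with a custom Node server (assuming the standard next custom-server API; upgrading is the real fix):

  // server.ts
  import { createServer } from 'node:http'
  import next from 'next'

  const app = next({ dev: false })
  const handle = app.getRequestHandler()

  app.prepare().then(() => {
    createServer((req, res) => {
      // never accept the internal header from the outside world
      delete req.headers['x-middleware-subrequest']
      handle(req, res)
    }).listen(3000)
  })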
Ouch, 13 days to triage that is crazy. I definitely wasn't in need of any more reasons not to use something like nextJS, but I'll add this to my list.
We opted for self-hosted next.js as the architecture for the web app we are building because we believed a lot of the hype.
The more comments I read about it in HN, the less comfortable I feel about this decision.
HN has a very weird mind-set when it comes to JS frameworks.
Next.JS is more than fine for 99% of web apps, and the fit only gets better the bigger your web app/platform. In general it's probably the framework that will give you the most bang for your buck.
except, you know, when you can bypass auth by adding an http header :)
Not that this isn't a serious attack vector (a possible one), but most implementations are not simply using middleware as a standalone check for authorization then blindly serving paths/content up.
That'd be pretty bad architecture in any stack.
so having "some protections" like db foreign key scoping that mitigates "well anyone can now bypass auth middleware for any route" makes this…
"not that bad on nextjs part"
no no, this is absolutely nuts.
Some of you are just ready for an argument; you responded to my post yet seemingly missed the very first sentence fragment:
>Not that this isn't a serious attack vector
At no point did I say or imply what you put in quotes.
I disagree. Why pollute every function with code checking for auth if you can just do it in a middleware?
The middleware should fetch auth, not check it. Each page should check the auth provided by the middleware. Skipping middleware wouldn't bypass anything in this case.
If each page has different criteria, sure, but if not, why? Let's say I simply care if the user is a paying member. I don't see why I wouldn't just have that in the middleware.
You don't do it everywhere. You do it in the source system. The Next.JS application should just be doing "sanity" checks and passing along identity information at most. That belongs in the middleware layer, but it's not authoritative.
If bypassing a middleware layer is the one "trust me bro" check you have in your web app, then lol.
That's actually really hilarious and you should tell me what company/website that's for so I can submit some bug bounties.
Isn't next.js the "source system" (or whatever that means) in most cases, since most apps are just next.js + database? I don't use next.js but my understanding is it does both backend and frontend.
You will never bypass middleware on my services because they actually always run. If you can't rely on your middleware then you are using the wrong tech.
I haven't heard any good reason as to why not to have auth in your middleware layer. Just attempts to shrug it away as a "trust me bro" check. Are if statements trust me bro too? The only thing you shouldn't be doing is using garbage software like next js
From next.js homepage > Middleware > Take control of the incoming request. Use code to define routing and access rules for authentication, experimentation, and internationalization.
>Isn't next.js the "source system"
Absolutely not. You are pulling from something else. If you need authorization to view a page that means it's more than likely not going to be SSG or ISR, so both the Next.JS application and the source system should be doing authorization checks.
>If you can't rely on your middleware then you are using the wrong tech.
"If you can't rely on server less functions to run"
I mean, I can't help you there if that's your expectation that serverless functions will always run correctly.
>Just attempt to shrug it away as a "trust me bro" check.
If you lose identity and your system just chugs along anyway then there isn't a tech stack in the world that can help you.
>I haven't heard
Because you're being a dense muppet?
> I mean, I can't help you there if that's your expectation that serverless functions will always run correctly.
Crashing, failing I/O, are expected. What's not expected is logic code being ignored. I can't take you seriously when you think it's acceptable to just skip past parts of your code.
If you think bypassing middlewares is acceptable you are completely deluded. But I guess that's needed to pay $150/TB for bandwidth.
Did you even read the thread you're commenting on? What makes you think I think it's acceptable?
Some of you are truly insufferable, holy shit.
That's a bold claim that's easy to refute.
Next.js is a bad choice for a lot of apps, javascript is slow at a lot of things.
Next.js would be a terrible choice for any app that has any non-trivial compute, for example.
You said it was easy to refute yet you merely stated a mis-framed, contrarian perspective.
If you're going to try to be pedantic, do it right?
>Next.js would be a terrible choice for any app that has any non-trivial compute
Most web apps only need trivial compute. If you're including back-office, source systems in the word "web app" well that's your sticking point, not mine.
How is it pedantic? What is your understanding of that word?
Why do I have to laboriously explain a fairly simple concept? Here you go:
Javascript is a non-compiled language. It is slow, orders of magnitude slower than other languages such as Go, Rust, C#, Java, etc.
Quick note, you might not understand orders of magnitude. It means 10^n times, so 1 order of magnitude slower is 10x slower, 2 orders of magnitude is 100x, 3 1000x, etc.
A huge percentage of apps need to do decent CPU work, way more than 1%, which Javascript is not appropriate for.
This is HN, you should have rudimentary understanding of the differences between languages.
If you want another example, any app that deals with money, decimals or anything mathematical should not be written in javascript.
Another massive chunk of apps, way more than 1%.
This is because 0.1 + 0.2 is not equal to 0.3 in javascript.
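For anyone who hasn't run into it:

  console.log(0.1 + 0.2)          // 0.30000000000000004
  console.log(0.1 + 0.2 === 0.3)  // false (IEEE 754 doubles, not a JS-specific flaw)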
People who don't know why that is really shouldn't be commentating on this topic, they're on Mount Stupid in the Dunning-Kruger Effect curve.
Please, show me a single benchmark that shows any other language that is even a single "order of magnitude" faster than JavaScript at literally anything.
It's funny that you put in the effort to condescendingly define orders of magnitude, but you forgot to check to see if you were actually correct before writing out eight paragraphs that made you look like a pompous ass.
Hating JavaScript is just pointless and sad at this point.
Most web apps are IO bound, not CPU. JS is just as fast at IO as any other language.
>Why do I have to laboriously explain a fairly simple concept
I mean, that's on you. You think you're saying something when you're not and you're trying to justify it.
You could just admit you made a mistake and move on with your life.
Wait, how is "JavaScript is slow at a lot of things" (a vague/questionable premise by itself) relevant to the discussion here?
I'm of two minds with regards to Next.js. On one hand it gives you so many things to like out of the box, especially when you pair it with something like T3 and the like. On the other hand, it's such a massive goliath that it blows my mind how it gets going at all. It's slow to the point where usually you don't even need to think about performance for basic web apps, but with Next.js you do. Etc. As with any tool, it comes with tradeoffs.
Luckily for you, a saving grace of Next.js is that if you ever decide it really does not work for you, you can probably get off of it with comparatively less pain than some other stack change. Your frontend is just React, and that will still all work. And if you squint a little, your backend is just Javascript, so you can take it to regular Node land.
> because we believed a lot of the hype
Never buy the hype.
Buy boring and tested.
I spent about a week coding in it trying to figure out what the hype was about. I decided to go with django/htmx. A year later I have absolutely no regrets.
After going through hell with self hosting and then their platform around version 12 we migrated away.
I recommend finding something else. In our case we moved that code to what is now react router 7 but eventually all the react code we have will get replaced by Vue in some manner. We mostly moved away from react as a whole over time
I like NextJS and actively choose it for many projects. However, there is a big caveat: self-hosting NextJS is not the "real" NextJS experience that most people have, because most people are using Vercel's platform and that is the focus of Vercel.
Self-hosting NextJS is a bad idea; the benefits of NextJS are inextricably linked to the Vercel platform. You will live to regret self-hosting. I would never, ever consider doing it again, and I still suffer the pain of it day in, day out because of a bad decision I made a few years ago.
Use NextJS as expected (on Vercel's platform) or don't use it. If you self-host on your own serverless infrastructure, that's not a terrible idea, but if you try to self-host NextJS on servers, it will fall over at the first hint of traffic.
I’d like to understand more about why this is and whether your experience is universal.
Next.js is based on a fundamentally flawed premise that one can write code that runs in the browser as well as the backend.
The security posture for the code running in the browser is very different from the code running on a trusted backend.
A separation of concerns allows one to have two codebases, one frontend (untrustworthy but limited access) and one backend (trustworthy but a lot of access).
This vulnerability has nothing to do with isomorphic code, right? Next middleware only runs on the server (or on “the edge,” which is still a server even if it’s running in a browser-esque environment).
How many frontend (full stack) guys even understand the difference?
How many people offering knee-jerk takedowns of Next.js understand the difference? Hard to say.
This vuln doesn't really have anything to do with that premise. Middleware always run on the server.
While it's true that running code in an isomorphic manner by definition gives you more footguns, you can mitigate it somewhat if you architect the framework with that in mind. In rust land for example, you can just not implement the "Serialize" trait on your sensitive data structs and they can't leave the server realm without the developer jumping through some hoops.
I don't think this is true in principle. It should be pretty easy to statically verify that the separation is safe using something similar to trusted types and the Typescript type checker. It's not possible in Next.js, but that doesn't mean the premise is wrong.
That could help in some cases (maybe even the areas where their server-side replicas of browser APIs aren’t quite consistent), but how would it handle things like someone putting a validation or access control check in the client-side code? A lot of these things come down to the code correctly doing what a confused author intended.
In this case, it’d also be interesting to try to figure out what a fix would look like in that model. You could have some way for a type-checker to tell the requests apart, such as a union type for Client|Edge|Server requests, but you’d need a way to assert that the header couldn’t be present on all of them, which suggests the real problem is using in-band signaling. It seems like a solid argument for type-checking, since making the relationship clear enough to validate also makes it harder for humans to screw up.
The simplest way is that all resources require an authenticated type for access, and getting that authenticated type requires an input (secret) only available on the server.
Facebook does something like this and it works pretty well
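A rough TypeScript flavour of that idea (a sketch; all names are made up): the token type can only be minted by a server-only module that holds the secret, so nothing else can even construct the value the data layer demands.

  // viewer.server.ts (server-only module; the secret never ships to the client)
  const AUTH_SECRET = process.env.AUTH_SECRET ?? ''

  export class Viewer {
    private constructor(readonly userId: string) {}

    static fromSessionToken(token: string): Viewer | null {
      // made-up check: verify `token` against AUTH_SECRET / a session store here
      return token && AUTH_SECRET ? new Viewer('demo-user') : null
    }
  }

  // data.server.ts: every sensitive accessor demands proof of authentication
  export async function loadAccountData(viewer: Viewer) {
    return { userId: viewer.userId, balance: 0 }
  }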
How many startups/small companies that uses next.js do any static verification?
I don't know, but I am just pointing out that the problems here are really about execution, not vision.
There's no reason we can't properly enforce security boundaries in the browser, we already do it between the website's code and the local machine.
These ideas have been around a long time and predate the internet. See for example Liskov's work on Thor:
If a vision has not been executed despite so many people trying, maybe something is wrong with it?
No, many have executed this vision. Facebook was doing this since at least 2010
I am concerned it took over 2 weeks to start triaging a security bug along the lines of "auth can be bypassed"
I find middleware annoying in general. It saves some typing, but it makes debugging much harder because you have to dig through layers of nested middleware to figure out why you’re seeing a 404, unexpected redirect, or whatever.
I’d rather just factor common logic into a function and call it in the handler for every route that needs it. Boring, repetitive - but easy to understand and debug.
It probably is a good idea to have some kind of thin middleware layer that adds an extra layer of auth protection, so that it’s more difficult to accidentally do something like allowing access to /api routes for users that aren’t logged in. But for reasons that are obvious in this context, you should never rely entirely on URL-based logic to protect access to resources.
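The boring version of that, for reference: a plain helper called at the top of each handler that needs it (a sketch; the session-cookie lookup is a stand-in for whatever your real check is):

  // app/api/report/route.ts
  import { cookies } from 'next/headers'

  async function getSessionToken(): Promise<string | null> {
    return (await cookies()).get('session')?.value ?? null
  }

  export async function GET() {
    const token = await getSessionToken() // explicit, per-route, easy to grep for
    if (!token) return new Response('Unauthorized', { status: 401 })
    return Response.json({ ok: true })
  }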
VC influence in the web space has been a fascinating thing.
I hope Next's downfall sends a signal to the quality lib maintainers and changes direction (e.g. Remix and a f'd up router, TanStack w/ Start).
SSR frameworks make me vomit.
SSR is fine. We used to call it "PHP" or "Ruby" or "Java." People need to stop reinventing things, but feature development outweighs maturity when you have funding.
There is a big difference here.
The stuff you mention was “born” at the backend and was then used to render frontend html at the very beginning then css, then JS etc…, but going from a frontend framework like React to the backend is an entirely different beast.
> People need to stop reinventing things
Why?
Because it denies people a chance to live in peace. If the ego constantly needs to justify itself, then we can't live in peace. If you need to learn, please do so on your own and bring what you can from your private learnings. It's not obvious to the rest of us why their framework needs such a security apparatus. The reason it's not obvious is because most of us don't see it as obvious (by definition, its not an obvious thing they are doing). Security, of all things, should be obvious to us all. As in, most of us should go "well this could have happened to any of us", but I don't think we are all thinking that. We are kind of thinking, "why would you confuse security to this degree across these layers?".
I feel like I missed the whole SSR wave. I've been very happy just using vanilla React.
Vite has been a joy.
I agree, Vite is amazing. I've found that it addresses all of my complaints from Webpack and Create React App.
The ecosystem seems to be standardizing around Vite which is nice!
The many false claims around those isomorphic frameworks (SSR and CSR after hydration) drive me crazy. But why should marketing fail to work on smart people?
"Drink this soda pop and see, you have pretty friends and look how happy you are!"
"Drive this car and be a successful business person and have a great house and family!"
"Use NextJS and be as successful and popular as those tech bros at X and YT".
Those frameworks have some small use cases (e-commerce, semi dynamic content delivered to low end devices with lots of JS later on for analytics), but most of the time old school SSR (RoR, Django, ASP.NET MVC, ...) or SPA (Vue with Vue router) are the more appropriate solutions.
Hype driven development is a very real thing.
If anyone is looking for a good alternative to NextJS, try looking into Tanstack Start. It's currently in beta but it will probably be the best way to build full stack React apps when it hits 1.0, so it might be worth looking into for future apps.
You just add a plugin into Vite and gain SSR, streaming, server functions, and API routes with minimal configuration. You basically just add an ssr.tsx and client.tsx file into your existing TanStack Router application and it becomes full stack with full type safety.
If you want to go back to a React SPA just remove the plugin and config it back to SPA.
I built an app with it recently and it has an amazing DX.
Best part you can literally run it anywhere. It builds for any platform with a single configuration.
What makes it a good alternative? The fact it’s in beta would be a non starter for my org but it’s possible once it’s out of beta it would be worth looking at.
The maintainer also renamed their most popular package (react-query, which was ubiquitous and highly regarded) to fit their eponymous Tanstack branding, meaning if you didn't hear about it somewhere else you would just stop being notified of updates.
Is Vercel any different in this regard? They’ve often broken APIs between releases too. The fact that a company that raised $500 million is on par with a small team of volunteers is more damning, not less.
Vercel’s reputation is so cooked. Jeez.
This recent post by their CEO is funny in hindsight: https://xcancel.com/rauchg/status/1901786957149032869
Hypes up AI coding, hypes up AI for security in particular, then immediately faceplants onto a critical auth bypass.
"It’s too easy for a human today to forget to auth an endpoint."
Woah. Is this real?
deployments on Vercel and Netlify aren't affected so the thing he is actually a CEO of is doing its job
I mean their whole product is geared towards bad developers. And I don't say that loosely, I literally mean bad developers: developers who do not understand what a product is, and how learning something slightly more difficult, such as servers and things of that nature, can actually make for a better product.
My biggest concern about these so-called “isomorphic frameworks” is they’re trying to abstract away the server/client distinction. I don’t see how that doesn’t result in tons of security bugs. Or maybe I’m just an old fart.
Why do you assume people who choose Next don’t understand those things or what they are? Is it so hard to imagine someone understands “servers and things of that nature” and still chooses next?
What product alternative to Nextjs would you say is targeted towards "good developers"?
nextjs is in a class of software that should not exist (backend-for-frontend). You can have an SPA and an API in any backend language/framework.
Agreed. A SPA without some backend for front end is always going to push you to store authentication tokens in local storage.
It feels like a shortcoming in browsers that we need a BFF to resort to cookies that JS can’t access.
I like React, but I feel that we primarily use Next.js just for session cookie management.
Just a simple PHP backend (Laravel or Symfony) with React or Vue as the frontend is probably less headache and less costly than this Next.js monstrosity.
In a recent project I used Inertia as a layer to communicate between Laravel and React and I must say, it's a breeze. No more frontend API endpoints needed.
seems pretty reasonable to me. it's not that it's 100% accurate. it's that anything that has higher barrier to entry automatically acts as a filter.
compiled languages, esoteric languages, i mean it's pretty reasonable. you have to go out of your way to learn more stuff and there's more pain. ergo: you're probably "better" than a script kiddie that "vibe coded" a boilerplate saas to launch their influencer passive income side hustle
There's no marketable benefit to using Mootools to build web apps at the moment.
it only took 16 days to triage a global next.js auth bypass
Vibe security.
1) What.
Add a single header 'x-middleware-subrequest' and it allows you to completely bypass any self-hosted Next.js middleware, including authorization.
This is beyond damning.
It's also exactly the reason why the whole Javascript ecosystem is really showing how immature it is and the hype and euphoria of Vercel is contributing to its clumsiness.
They are now also pushing "Vibe Coding", which is a hot air hype parade, about to be brutally hit with reality when others are deploying production code that is riddled with hundreds of security vulnerabilities.
A delightful golden age for professional security researchers.
> This is beyond damning.
Absolutely agree.
> It's also exactly the reason why the whole Javascript ecosystem is really showing how immature it is and the hype and euphoria of Vercel is contributing to its clumsiness.
I would hardly say the whole JS ecosystem is immature. There's tons of mature projects that take security very seriously and are written by highly skilled programmers.
> They are now also pushing "Vibe Coding", which is a hot air hype parade, about to be brutally hit with reality when others are deploying production code that is riddled with hundreds of security vulnerabilities
There are certainly many fresh programmers entering the ecosystem and "vibe coding" among other hyped trends are able to ride that wave. It's pretty clear that those hyping it are either new themselves (don't know better), or cater to an audience of new programmers. Those in the latter group are doing it to farm engagement, and/or are really out of touch from what real software systems look like/require.
The silent majority of moderate to highly experienced JS programmers know that these LLMs produce shit code outside of boilerplate and small demos. It's very easy to tell if you try to use them on anything else.
It is concerning on many levels though that new programmers are being guided off a cliff like this. Programming influencers and companies advocating for "vibe coding" and the like should be called out for sabotaging the next generation of programmers.
Is NextJS considered safe? Would you build something for the government or a big Corp with it?
No. I wasn't concerned about security but just churn. They keep changing things. They also don't fix stuff people care about a lot.
I'd just use Koa and keep it simple.
Yes, as much hate as it tends to get on here it's really fine. This vulnerability is unfortunate but every library/framework will have security issues over its lifespan.
The trivial nature of the initial exploit does not instil confidence, nor does the fact that no one noticed it during the refactor that led to the second variation of the exploit.
The question is not NextJs. It's the kind of developers who are attracted to NextJs in the first place.
I think you have to ask what it’s compared to. Certainly this is no worse than things we’ve seen in the PHP or Java space and people still use those.
However, there is one argument you could make regarding the massive amount of complexity which Next takes on trying to blur client and server execution. That’s prone to creating confusion around validation and control flow, which is a notorious source of security vulnerabilities and it looks like this might be another one as it appears to be related to how they try to transition from edge execution to server-side.
So it’s less a Next-specific point than recognizing that poor architecture is an ongoing risk. This kind of approach has been tried repeatedly over the decades and has generally failed to deliver on its promise, because it only saves time building out a quick demo. Once you have a real app, with multiple people working on it, you really want a clear definition of what runs where, because it’s much easier to reason about security, performance, and reliability if you don’t have layers of abstraction trying to pretend unlike things are alike.
> no worse than things we’ve seen in Java space
My memory fails me - I can’t recall a vulnerability in the JVM ecosystem that allows an attacker to circumvent auth entirely with such trivial ease. Can you name an example?
This doesn’t necessarily allow attackers to circumvent authentication entirely - it’s a framework so it depends on how you configure your app - but there have been plenty of vulnerabilities over the years which break auth or even allow an RCE for the conceptually similar challenge of trying to support complex proxying setups or, more broadly, failing to have clean boundaries for untrusted data.
I’m not defending this one - it’s bad, and an indicator about technical debt levels - but simply trying to encourage some humility about this. It’s not the language, it’s the complexity and attempts to paper over rather than reduce it.
If you want the most recent similar one I’ve seen, Apache Camel had one last week where you could inject their internal magic headers by using different case than the developers expected.
Going a bit older, in some ways this Tomcat exploit from 2020 feels similar because it’s an unenforced internal trust boundary. The AJP connector was more trusted, but was also enabled by default on all ports.
https://issues.apache.org/jira/plugins/servlet/mobile#issue/...
What about Log4J? Just to answer your question, we’re talking about two completely different ecosystems!
This is one of the worst security vulnerabilities I have seen in a while. It's so blatant, so easy to exploit. So many nextjs applications written by beginners that are completely exposed.
Middleware skipping could expose all kinds of problems. A lot is done in middleware that the rest of the code can lay back and assume is dealt with.
It's going to take a while for the LLMs to catch up so we can un-vibe our way out of this
What is debugging in vibe coding? If the vibe changes, that's gotta be a blocker. If the vibe changes, then I guess you are stuck and need to white board or go for a walk? I talk a lot of shit about Gen-Z, but they come up with the best terms.
Interviewer:
How do you handle vibe changes in vibe coding?
Candidate:
I can handle any type of vibe change.
Interviewer:
This is exactly what we are looking for.
I like NextJS and this vulnerability requires a specific self-hosting arrangement coupled with a specific flag
I think this discussion is bringing a lot of unrelated angst out of the woodwork that is beyond the level of rationality warranted
I think it’s rightful to be skeptical of Vercel’s incentives to vendor lock-in, and of how long it took to deal with this vulnerability. That’s all independent of most of what I’m reading here
I hate how productive I am with this framework. I try to move on and can’t. I’ll take a security hit to use it over leptos.
What does this get you over vanilla express servicing a react front end?
Is it the rest of the deploy infra? The vanilla app you can push to Heroku or any of its clones.
They’re different tools. If I were building a JS server for a backend, I’d use Express. Next gives you things like server side rendering and static site generation out of the box, and abstracts/blurs the line between server and client code through its paradigms. For better or for worse.
The deploy infrastructure is quite nice. Nextjs is surprisingly low config, even if you forego the Vercel deployment route it’s not difficult to generate a static site or docker container
If you want a simple but powerful full-stack JS framework (literally, client and server are separated as they should be—no trickery) that is being built carefully and slowly, check out Joystick [1][2]. To put it in simple terms: if Next.js is the hare, Joystick is the tortoise.
It uses plain HTML, CSS, and JS for components (no React, Vue, Svelte, etc.—just simple components any skill-level can grok) in an easy-to-learn API and pairs that with a batteries-included Node.js back-end built on top of Express. The server automatically does old fashioned server-side rendering in routes (literally a callback function mapped to a URL pattern w/ req and res objects).
This is not "just another JS framework." I intentionally designed it to not behave like Next.js and other JS frameworks (I take "never trust the client" very, very seriously).
Can we take a moment to appreciate how good the disclosure and coordination process on this were?
* Reported to the maintainers privately
* Patch published and CVE issued before wider disclosure
* Automated fix PRs created within minutes of public disclosure (and for folks doing proactive updates, before)
The above is _really_ excellent. Compare that to Log4j, which had no CVE and no patch at the time it became public knowledge, and it's clear we've come a long way.
Supply chain security isn't a solved problem - there's lots we can still improve, and not everything here was perfect. But hats off to @leerob and everyone else involved in handling a tough situation really well.
It took over two weeks to triage on Vercel’s side after disclosure. How is that “good”?