[Megathread] Frontier Models Moving to Max Mode for Legacy Team/Enterprise Plans


Starting March 16th, all Team and Enterprise accounts still on legacy request-based pricing need to enable Max Mode to access frontier models, including GPT 5.3 Codex, GPT 5.4, Opus 4.5/4.6, and Sonnet 4.5/4.6.

All other models remain unaffected. This change does not apply to individual plans or accounts on our new pricing (introduced with our June 2025 update).

This was communicated to Team and Enterprise admins through email last week, but we’re sharing it here for broader visibility. Enterprise account owners will be contacted separately with account-specific details.

Why are we making this change?

As frontier models become more capable, they run longer, use larger context windows, and consume significantly more tokens per interaction. A single complex request can vary widely in cost. Fixed-per-request pricing no longer reflects reality, so we’re transitioning these models to token-based billing to keep pricing aligned with actual usage. This mirrors our June 2025 pricing update for individual plans.
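To illustrate why a flat per-request fee stops reflecting cost, here is a minimal sketch. All prices and token counts below are illustrative assumptions, not actual Cursor or provider rates:

```python
# Illustrative only: assumed prices, not real Cursor or provider rates.
FIXED_PRICE_PER_REQUEST = 0.04   # hypothetical legacy flat fee per request
INPUT_PRICE_PER_MTOK = 3.00      # hypothetical $ per 1M input tokens
OUTPUT_PRICE_PER_MTOK = 15.00    # hypothetical $ per 1M output tokens

def token_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one interaction under token-based billing."""
    return (input_tokens / 1_000_000 * INPUT_PRICE_PER_MTOK
            + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_MTOK)

# A short chat turn vs. a long agentic run with a large context window:
small = token_cost(2_000, 500)      # well under the hypothetical flat fee
large = token_cost(300_000, 8_000)  # roughly 25x the hypothetical flat fee
print(f"small: ${small:.4f}, large: ${large:.4f}")
```

Under these assumed rates the small turn costs about $0.013 while the large run costs about $1.02, so a single flat fee either overcharges light usage or heavily undercharges agentic runs.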

New posts on this topic will be redirected or merged into this thread. We’ll continue updating this post with FAQs as they come in.


Frequently Asked Questions

Are individual plans affected? No, with the exception of GPT 5.4, which has been Max Mode only for all users since launch. The March 16th change does not impact individual plans.

I’m already on usage-based pricing. Does this affect me? No. This only applies to Team and Enterprise accounts still on legacy request-based pricing.

Does Max Mode mean I get the 1M-token context window? Max Mode for legacy request-based plans uses token-based billing rather than fixed requests. The extended context window is a separate option with its own model identifier.

I purchased an annual plan. Does this change mid-year? Your subscription pricing continues for the duration of your billing period. Max Mode changes how frontier model requests are metered. It does not affect your base subscription cost.

4

After installing the latest update, MAX Mode is now forcibly turned on for several of the main models. Does this effectively mean a price increase?
Are you also seeing MAX Mode being forced on, and do you actively use MAX Mode yourself?

7

Thank you. It’s good to know it’s not just me. I’ve started to seriously reconsider whether I should keep using Cursor under this new behavior.

Yep, the same is happening to me. Will this affect the pricing/billing now?

Also, please help if there is a way to go back to an earlier version.

9

what are cursor alternatives then

12

Here is the answer to the price increase issue.
I suspect almost nobody saw it, because the thread was deleted almost right away.

13

This change is BS, the least one would expect is a notification via email and enough time for customers to figure things out.

How naive of me to think this could be a bug :smiley:. Enterprise Plan - Models Incorrectly Showing as MAX Plan Required

16

Totally agree with you. The Cursor team should at least post a notice about this on their site.

My organization started migrating to Claude Code today because of this

just ridiculous.

20

So has anyone used it? How does it work? Does it pull from the request budget, or can we only use the old models on requests, with anything new coming out of the supplemental dollar budget?

21

Heard nothing about this from Cursor itself, and there was no indicator in the app last night when the switch was made. Went from 1-2 requests per prompt to using 200 in 2 prompts.


Same for my team. We have to move to Claude and stop using Cursor; under the new pricing policy there is no choice for us. Cursor’s new pricing is pushing everyone toward Claude.

23

This is literally ■■■■■■■■■
I don’t get why we are forced to use Max mode on top of the pricing change.

25

Just want to share that this made CURSOR basically USELESS :wink:

You can see the increase from 2 requests to 50+; the company basically depleted its limits in one afternoon :slight_smile:

The situation is very similar to what @akoske shared above… even just today we’re basically losing a lot of money, and it’s killing productivity completely.

Basically the options are either to FIX MAX mode, because the number of requests it burns is absurd (forcing it on legacy request-based plans is just overkill), or to move everyone on legacy plans to token-based pricing, but that takes time and makes it very likely people will switch to another provider.

26

Request-based plans are pretty much useless now; a single prompt can burn up to 1,000 requests. The smallest prompts in Ask mode with 300k tokens burn 30-50 requests. There must be a bug with MAX mode for it to use that many requests for no reason.

Guys, just use API keys for OpenAI, Anthropic, etc. and it will be much cheaper!
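For anyone weighing the direct-API route, here is a rough sketch of how you might estimate a month’s spend from your own usage numbers. The provider names and per-token prices are placeholders, not real rates; check each provider’s current pricing page:

```python
# Placeholder prices ($ per 1M tokens); check providers' pricing pages.
PRICES = {
    "provider_a": {"input": 3.00, "output": 15.00},
    "provider_b": {"input": 1.25, "output": 10.00},
}

def monthly_estimate(provider: str, turns: int,
                     avg_input: int, avg_output: int) -> float:
    """Estimate monthly API cost from average tokens per turn."""
    p = PRICES[provider]
    per_turn = (avg_input * p["input"] + avg_output * p["output"]) / 1_000_000
    return per_turn * turns

# e.g. 1,500 turns/month averaging 20k input / 1k output tokens:
print(f"${monthly_estimate('provider_a', 1500, 20_000, 1_000):.2f}")
```

Whether this beats a subscription depends entirely on your team’s token volume, so run it against your own logs before switching.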

28

Sure, let’s go spend even more money on model providers to compensate for what was done here; why didn’t I think of that!

If you want to pay outrageous prices, keep using Cursor as is.