Updates to GitHub Copilot interaction data usage policy
If you scroll down to "Allow GitHub to use my data for AI model training" in GitHub settings, you can enable or disable it. However, what really gets me is how they pitch it like it’s some kind of user-facing feature:
Enabled = You will have access to the feature
Disabled = You won't have access to the feature
As if handing over your data for free is a perk. Kinda hilarious.
It’s not so bad, there’s no double negative and it’s not a confusing “switch” that is always ambiguous as to whether it’s enabled or not.
In contrast, when you create a GCS bucket it uses a checkmark for enabling “public access prevention”. Who designed that modal? It takes me a solid minute to figure out whether I’m publishing private data or not.
Disabled - You won't have access to this feature of disallowing training.
"<sighs> They could've made this clearer..."
https://old.reddit.com/r/TheSimpsons/comments/26vdkf/dont_do...
I went to check on this, and even though I have everything Copilot-related disabled, the two bars that measure usage somehow show my Copilot Chat usage at 2%. How is this possible?
Before anyone tries to sell me on AI: this is my personal account. I have and use Copilot on my business account (a completely different user account); I just make a point of not using it in my personal time so I can keep my skills sharp.
Does GitHub count it as Copilot Chat usage when you use the AI search on their website, I wonder?
I wonder if that’s it! I occasionally do some code search on GitHub, then remember it doesn’t work well and go back to searching in the IDE. I usually need to look at something other than the main branch, because a lot of my projects have a develop branch where things actually happen. That would explain it, so I guess that’s it.
If you're talking about the quota bar: that only measures your premium request usage (models with a #.#x multiplier next to the name). If you only use the free models and code completion, you won't actually consume any "usage". If you use AI code review, that consumes a single request (now). Same with the GitHub Copilot web chat: if you use a free model it doesn't count; if you use a premium model you get charged the usage cost.
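As a rough sketch of that accounting (the model names and multipliers below are made up for illustration, not GitHub's actual values):

```python
# Hypothetical premium-request accounting as described above.
# Model names and multipliers are made up for illustration.
MULTIPLIERS = {
    "free-model": 0.0,      # included models: a 0x multiplier, never counted
    "premium-1x": 1.0,      # a standard premium model
    "premium-3x": 3.0,      # a heavier model with a 3x multiplier
}

def usage(requests):
    """Total premium requests consumed by a list of model calls."""
    return sum(MULTIPLIERS[model] for model in requests)

print(usage(["free-model"] * 50))            # 0.0: free models never count
print(usage(["premium-1x", "premium-3x"]))   # 4.0: 1 + 3 premium requests
```

So a quota bar sitting at 2% would mean a handful of premium-model requests somewhere, not code completion or free-model chat.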
the framing is so manipulative. "you will have access to the feature" — what feature? the feature of giving away my data? at least be honest and call it what it is. i turned it off immediately but i wonder how many people just leave it because the wording makes it sound like you lose something.
I guess the "perk" is that maybe their models get retrained on your data making them slightly more useful to you (and everyone else) in the future? idk
The feature is that your coding style will be in next models!
I wish my GPL license would transit along with my code.
I said a few years back that code licenses effectively don't exist anymore; some people just haven't realized it yet.
Previously, big tech still had to find loopholes around the GPL, so licenses still had some value.
Nowadays it genuinely feels like much less, because there are now services that will rewrite your code to get around the license.
I used to think that somewhat non-proprietary licenses like the SSPL might be interesting approaches, but I feel like they aren't much less prone to this either anymore.
So now I am not exactly sure.
If you are wholly confident that model training is a violation of the GPL then go sue.
I guess freedom of study and use may also include training AI, but it would be cool if all the derivative work, such as AI models and the code generated from those models, had to be licensed as GPL. Lawyers needed here.
Is that not some stock feature-flag verbiage?
Stock dark pattern verbiage...
I'm a little surprised the options aren't "Enable" and "Ask me later".
But it isn't a feature, so using a feature flag is a bit weird.
How is it not a feature from a development standpoint? Colloquially any bit of intended functionality qualifies as a "feature" and certainly any functionality you conditionally enable/disable would be controlled by a "feature flag" regardless.
No, it’s not. Please think like a developer and not like someone playing amateur gotcha journalist on social media. Feature flags are (ab)used in this way all the time. What is a feature? What is a feature flag? It’s like asking what authorisation is vs all your other business rules. There’s grey area.
"Please think like a developer" lmao if I said this to someone at my dayjob I'd be gone.
It's worded that way to create FOMO in the hopes people keep it enabled.
Dark pattern and dick move.
A few days ago, I unchecked it, only to see it checked again when I reloaded the page.
It could be incompetence but it shouldn't matter. This level of incompetence should be punished equally to malice.
Thanks to your comment, I have disabled it now :-)
I agree that it feels like a dark pattern for the most part; makes me want to use Codeberg or self-hosted git.
> On April 24 we'll start using GitHub Copilot interaction data for AI model training unless you opt out. Review this update and manage your preferences in your GitHub account settings.
Now "Allow GitHub to use my data for AI model training" is enabled by default.
Turn it off here: https://github.com/settings/copilot/features
Do they have this set on business accounts also by default? If so, this is really shady.
Ugh, can't believe they made this opt-in by default, and didn't even post the direct URLs to disable in their blog post.
To add on to your (already helpful!) instructions:
- Go to https://github.com/settings/copilot/features
- Go to the "Privacy" section
- Find "Allow GitHub to use my data for AI model training"
- Set it to disabled
I always thought "opt-in" (not "opt in") meant something you have to actively choose to enable; otherwise, it stays off. So calling something "opt-in by default" sounds like a misnomer to me.
But English is not my first language so please correct me if I'm wrong.
You are correct
> can't believe they made this opt-in by default
You can't believe Microslop is force-feeding people Copilot in yet another way?
> and didn't even post the direct URLs to disable in their blog post
You can't believe Microshaft didn't tell you how to not get shafted?
He must be new here
https://github.com/orgs/community/discussions/188488
> Why are you only using data from individuals while excluding businesses and enterprises?
> Our agreements with Business and Enterprise customers prohibit using their Copilot interaction data for model training, and we honor those commitments. Individual users on Free, Pro, and Pro+ plans have control over their data and can opt out at any time.
Aka "they have lawyers and you usually don't, so we think we can get away with it."
only big companies have access to the legal system. nobody else can afford it
> and we honor those commitments.
Ah, so when the inevitable "bug" appears, and we all learn that you've completely failed to honor anything, what will be your "commitment" then? An apology and a few free months?
Time to start pushing for a self hosted git service again.
Yes - not impressed at all that this is opt-in default for business users. We have a policy in place with clients that code we write for them won’t be used in AI training - so expecting us to opt out isn’t an acceptable approach for a business relationship where the expectation is security and privacy.
It is not opt-in by default for business users. The feature flag doesn't show in org policies and github states that it's not scoped to business users.
Gah, you’re right. But given that I don’t use personal Copilot, that I do manage an organisation that gives Copilot to some of our developers, AND that the email I was sent this evening made no mention at all of business Copilot being excluded, it could definitely have been communicated better…
My email does mention it clearly:
> Again, your organization's Copilot interaction data is not included in model training under this new policy, but we are excited for you to enjoy the product improvements it will unlock.
Just confirming, we do not use Copilot interaction data for model training of Copilot Business or Enterprise customers.
You shouldn't enable this for the public by default; it should require an explicit opt-in. But that is the Microslop effect on GitHub: users are an afterthought.
Per their blog post
> Business and Copilot Enterprise users are not affected by this update.
We have a business account, and because of issues like this, access to anything CoPilot is blocked.
Interestingly, it is disabled by default for me.
Reading the github blog post "If you previously opted out of the setting allowing GitHub to collect this data for product improvements, your preference has been retained—your choice is preserved, and your data will not be used for training unless you opt in."
Is this the new name for the setting? I cannot find one that sounds like the previous one you mention
Notable that they have no "privacy" section in account settings
Me too, which is making me wonder if they're planning on silently flipping this setting on April 24th (making it impossible to opt out in advance).
We are not. The reason we wanted to announce early was so that folks had plenty of time to opt-out now. We've also added the opt-out setting even if you don't use Copilot so that you can opt-out now before you forget and then if you decide to use Copilot in the future it will remember your preference.
Would you be able to comment on https://news.ycombinator.com/item?id=47522876, i.e. explain the legal basis for this change for EU based users? If there is none, you may have to expect that people will exercise their right to lodge a complaint with a supervisory authority.
Why would you expect an engineer to be able to comment on legal affairs? Presumably it was cleared with Microsoft's legal department or whatever GitHub's divisional equivalent is.
Is it because I'm in the EU?
I'm in the US and it's off for me. I believe I've previously opted out of everything copilot related in the past if there was anything.
I'm in Canada, so not only the EU at least.
I guess we have to check again on April 24?
What did everyone expect? I can't understand this community's trust of microsoft or startups. It's the typical land grab: start off decent, win people over, build a moat, then start shaking everybody down in the most egregious way possible.
It's just unusual how quickly they're going for the shakedown this time
> Do they have this set on business accounts also by default? If so, this is really shady.
Looks like not, but would it actually have been shadier, or are we just used to individual users being fucked over?
If they turned it on for business orgs, that would blow up fast. The line between "helpful telemetry" and "silent corporate data mining" gets blurry once your team's repo is feeding the next Copilot.
People are weirdly willing to shrug when it's some solo coder getting fleeced instead of a company with lawyers and procurement people in the room. If an account tier is doing all the moral cleanup, the policy is bad.
The individual/corporate asymmetry you're describing is standard across B2B SaaS. Slack, Notion, and Figma all include ML training carve-outs in enterprise DPAs that free users don't get. GitHub isn't doing anything unusual here — they're just doing it with code, which feels more sensitive than documents or messages because it might literally be your employer's IP you're working on from a personal account.
The interesting nuance is the enforcement mechanism. martinwoodward clarified below that exclusion happens at the user level, not the repo level: if you're a member of a paid org, your interaction data is excluded even on a free personal Copilot account. That's actually more protective than I expected — it handles the contractor case where someone works across multiple repos of varying org types.
The remaining ambiguity is temporal: if someone leaves an org, do their historical interactions get retroactively included? Policy answers to that question are hard to verify and even harder to audit.
Fun fact: Copilot gives you no way to ignore sensitive files with API keys, passwords, DB credentials, etc.: https://github.com/orgs/community/discussions/11254#discussi...
So by default you send all this to Microsoft by opening your IDE.
Separate fun fact: Gemini CLI blocks env vars with strings like 'AUTH' in the name. It has two separate configuration options that are both supposed to let you allow specific env vars. Neither works (bad vibe coding). I tried opening an issue and a PR, and two separate vibe-coding bots picked up my issue and wrote PRs, but nobody has looked at them. The bug's still there, so I can't do git commit signing via an ssh agent socket. The only choice is less-secure, unsigned git commits.
On top of that, Gemini 3 refuses to refactor open source code, even if you fork it, if Gemini thinks your changes would violate the spirit of the intent of the original developers in a safety/security context. Even if you think you're actually making it more secure, but Gemini doesn't, it won't write your code.
Gemini also won't help you with C++ if you are under 18, since it would be unsafe.
Is it still true? That's two years old
It's improved significantly in that time, but relative to the other frontier models, it is still the one that is the most condescending and coddling.
I use Gemini 3 to edit multiple forks. Your statement is false based on stuff I actually do.
Well it's true based on my running into the issue 8 hours ago
Maybe it's your prompts? I've never had Gemini refuse to write any code in any context. I use it with Claude prompts, edited down, in particular to remove guardrails.
You shouldn't use Google AI products; they are inferior. Their models, though, are quite good. It's confusing when people use the model name when referring to a product. What's your setup?
Fun fact: you shouldn't have sensitive files with API keys, passwords, DB credentials, etc. in your repo
“In your repo” and “in the directory you are running copilot” are two separate things.
Sadly, this issue is systemic: https://github.com/openai/codex/issues/2847
OpenCode has a plugin that lets you add an .ignore file (though I think .agentignore would be a better name). The problem is that, even though the plugin makes it so the agent can't directly read the file, there's no guarantee the agent won't try to be "helpful" and work around it: "well, I can't read .envrc using my read tool, so let me cat .envrc and read it that way".
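That loophole is easy to demonstrate in miniature. Here's a toy sketch (the tool functions and ignore patterns are invented for illustration, not OpenCode's actual plugin API): an agent whose read tool honors an ignore list can still leak the same file through an unrestricted shell tool.

```python
import fnmatch
import os
import subprocess
import tempfile

# Patterns a hypothetical ".agentignore" file might list.
IGNORE_PATTERNS = [".env*", "*.pem"]

def read_tool(path):
    """The agent's file-read tool, which honors the ignore list."""
    if any(fnmatch.fnmatch(os.path.basename(path), p) for p in IGNORE_PATTERNS):
        raise PermissionError(f"{path} is agent-ignored")
    with open(path) as f:
        return f.read()

def shell_tool(cmd):
    """The agent's shell tool, which has no ignore check: the loophole."""
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# Demo: the read tool blocks .envrc, but `cat` via the shell tool does not.
workdir = tempfile.mkdtemp()
secret = os.path.join(workdir, ".envrc")
with open(secret, "w") as f:
    f.write("export API_KEY=hunter2\n")

try:
    read_tool(secret)
    blocked = False
except PermissionError:
    blocked = True

leaked = shell_tool(["cat", secret])
print(blocked)               # True: the read tool refuses
print("API_KEY" in leaked)   # True: the shell tool leaks it anyway
```

Unless the ignore check is enforced on every tool (including arbitrary shell execution), it's a polite suggestion rather than a boundary.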
I swear I just set up enterprise and org level ignore paths.
Yeah, it's a Copilot Business/Enterprise feature
If I'm paying, which I am, I want this to be opt-in, not opt-out. Mario Rodriguez / @mariorod needs to give his head a wobble.
What on earth are they thinking...
> What on earth are they thinking...
@mariorod's public README says one of his focuses is "shaping narratives and changing \"How we Work\"", so there you go.
Translation: more alignment with Microsoft practices
"shaping narratives", sounds like they follow the methodologies of a current president
It looks like the literal translation of "manipulation" to Linkedin-speak.
which one?
Thanks to Github and the AI apocalypse, all my software is now stored on a private git repository on my server.
Why would I even spend time choosing a copyleft license if any bot will use my code as training data to be used in commercial applications? I'm not planning on creating any more opensource code, and what projects of mine still have users will be left on GH for posterity.
If you're still serious about opensource, time to move to Codeberg.
Made the same choice, my open source projects with users are in maintenance mode or archived. New projects are released via SaaS, compiled artifacts or not at all.
I scratch my open source itch by contributing to existing language and OS projects where incremental change means eventually having to retrain models to get accurate inference :)
Yeah, I'm guessing that's probably because their TOS grants them some licensing workaround for running the service, which can mean anything.
I'm in my happy space selfhosting forgejo and having a runner on my own hardware
What is the legal basis of this in the EU? Ignoring the fact they could end up stealing IP, it seems like the collected information could easily contain PII, and consent would have to be
> freely given, specific, informed and unambiguous. In order to obtain freely given consent, it must be given on a voluntary basis.
It breaks GDPR easily: GDPR requires this kind of processing to be off by default, with no workaround like pre-filling the checkbox before the user hits submit.
Some will say this applies only to personal data. Fine, but it takes only one line of code for my phone number to end up in a test while I'm locally testing a registration form in the application I'm developing.
Once it gets sent to Copilot, I can threaten legal action if they don't take it down.
Based on https://github.blog/changelog/2026-03-25-updates-to-our-priv..., it looks like they are going to go for “legitimate interest” which seems clearly overridden by data subject interests in this case, hence not lawful.
If you don't want to wait until your PII inevitably gets sent through, you can already now file a complaint to your local supervisory authority: https://www.edpb.europa.eu/about-edpb/about-edpb/members_en
I actually don’t seem to have this option on my GitHub settings page, which leads me to wonder if this only applies to Americans.
I actually did have to manually disable this from Germany, so it might be a different reason you don't have it?
I have the setting in Australia.
I'd be curious to see which countries are affected
> This approach aligns with established industry practices
"others are doing it too so it's ok"
Ackshually Anthropic is opt-in AND they give you discounts if you enable it
It’s opt-out, not opt-in, at least for Claude Desktop and Claude Code, unless you use the API.
What kind of discounts? I have never heard of this
Anthropic puts up random prompts defaulting to enabled to trick you into accidentally enabling.
So basically they want to retain everyone's full codebases?
> The data used in this program may be shared with GitHub affiliates, which are companies in our corporate family including Microsoft
So every Microsoft owned company will have access to all data Copilot wants to store?
Why is there no cancel-Copilot-subscription option here? The docs say there should be...
Mobile
https://github.com/settings/billing/licensing
EDIT:
https://docs.github.com/en/copilot/how-tos/manage-your-accou...
> If you have been granted a free access to Copilot as a verified student, teacher, or maintainer of a popular open source project, you won’t be able to cancel your plan.
Oh. jeez.
I appreciated the notification at the top of the screen because it prompted me to disable every single copilot feature I possibly could from my account. I also appreciated Microsoft for making Windows 11 horrible so I could fall back in love with Linux again.
For what it's worth they're not trying to hide this change at all and are very upfront about it and made it quite simple to opt out.
They didn't even link the setting in their email. They didn't even name it specifically, just vaguely gestured toward it. Dark patterns, but that's Microslop for ya
going to github i was greeted with a banner and a link directly to the settings for changing it
I've seen worse dark patterns, to be honest... I don't think they're being malicious here.
They do not make it very simple to opt out. That is false.
On Android, for instance, I invite you to try the GitHub app and modify your opt-in/opt-out settings... You will find that nothing works on the settings page, once you actually find it after digging through a couple of layers and scrolling about 2 ft.
Microsoft doing dumb things once again.
Who in their right mind will opt into sharing their code for training? Absolutely nobody. This is just a dark pattern.
Btw, even if disabled, I have zero confidence they are not already training on our data.
I would also recommend sprinkling copyright notices all over the place and changing the license of every file, just in case they run some sanity checks before your data gets consumed. Just to be sure.
Serious question: let's say I host my proprietary code for my various clients on this platform. Who can guarantee that AI won't replicate it to competitors who decide to create something similar to my product?
If the code is ever visible to anyone else ever, you have no guarantee. If it’s actually valuable, you have to protect it the same way you’d protect a pile of gold bars.
What does “my code...for my clients” mean (is it yours or theirs)? If it’s theirs let them house it and delegate access to you. If they want to risk it being, ahem...borrowed, that’s their business decision to make.
If it’s yours, you can host it yourself and maintain privacy, but the long tail risk of maintaining it is not as trivial as it seems on the surface. You need to have backups, encrypted, at different locations, geographically distant, so either you need physical security, or you’re using the cloud and need monitoring and alerting, and then need something to monitor the monitor.
It’s like life. Freedom means freedom from tyranny, not freedom from obligation. Choosing a community or living solo in the wilderness both come with different obligations. You can pay taxes (and hope you’re not getting screwed, too much), or you can fight off bears yourself, etc.
It’s not clear to me how GitHub would enforce the “we don’t use enterprise repos” stuff alongside “we will use free tier copilot for training”.
A user can be a contributor to a private repository, but not have that repository owner organisation’s license to use copilot. They can still use their personal free tier copilot on that repository.
How can enterprises be confident that their IP isn’t being absorbed into the GH models in that scenario?
We do not train on the contents from any paid organization’s repos, regardless of whether a user is working in that repo with a Copilot Free, Pro, or Pro+ subscription. If a user’s GitHub account is a member of or outside collaborator with a paid organization, we exclude their interaction data from model training.
For private repositories under a personal account, if the repo owner has opted out of model training but a collaborator has not, would the collaborator's Copilot interactions with that repo still be used for training?
Thank you for clarifying this.
Quite simply, that's just a matter of the corporate internal policy and its (lack of) enforcement. This problem is just a subset of the wider IP breach with some people happily feeding their work documents into the free tier of ChatGPT.
I am not certain this is that big of a deal outside of "making AI better".
At this point, is there any magic in software development?
If you have super-secret-content is a third party the best location?
They've had ample access to the final output - our code, but they still hope with enough data on HOW we work they can close the agentic gap and finally get those stinky, lazy humans that demand salary out of the loop.
How about "no." You may be okay giving away your individual rights, including to copyright, but I am not.
The fact that this is on by default is ridiculous, especially for paid accounts, and even more especially for organizations, where certain kinds of privacy are sometimes mandated by the industry your business is in.
There should also be a much easier one-click to opt out without having to scroll way down on the settings page.
I just checked my Github settings, and found that sharing my data was "enabled".
This setting does not represent my wishes and I definitely would not have set it that way on purpose. It was either defaulted that way, or when the option was presented to me I configured it the opposite of how I intended.
Fortunately, none of the work I do these days with Copilot enabled is sensitive (if it was I would have been much more paranoid).
I'm in the USA and pay for Copilot as an individual.
Shit like this is why I pay for duck.ai where the main selling point is that the product is private by default.
They use data from the poor student tier, but arguably, large corporates and businesses hiring talented devs are going to create higher quality training data. Just looking at it logically, not that I like any of this...
On my Android phone I was able to change the setting using Firefox by logging into GitHub and not allowing it to launch the GitHub app.
I was unable to change the setting when I used the GitHub app to open up the web page in a container.. button clicks weren't working. Quite frustrating.
I wish GitHub would focus on making their service reliable instead of Copilot and opting folks into their data being stolen for training.
Mine was defaulted to disabled. I’m on the Education pro plan (academic), so maybe that’s different than personal?
I have GitHub Copilot Pro. I don't believe I signed up for it. I neither use it nor want it.
1. A lot of settings are 'Enabled' with no option to opt out. What can I do?
2. How do I opt out of data collection? I see the message informing me to opt out, but 'Allow GitHub to use my data for AI model training' is already disabled for my account.
Hey David - if you want to send me (martinwoodward at github.com) details of your GitHub account I can take a look. At a guess I suspect you are one of the many folks who qualified for GitHub Copilot Pro for free as a maintainer of a popular open source project.
Sounds like you are already opted out because you'd previously opted out of the setting allowing GitHub to collect this data for product improvements. But I can check that.
Note, it's only _usage_ data when using Copilot that is being trained on. Therefore if you are not using Copilot there is no usage data. We do not train on private data at rest in your repos etc.
Cheers!
I'm ready to abandon GitHub. Enshittification of the world's source infrastructure is just a matter of time.
So, how does this work with source-available code, that’s still licensed as proprietary - or released under a license which requires attribution?
If someone takes that code and pokes around on it with a free tier copilot account, GitHub will just absorb it into their model - even if it’s explicitly against that code’s license to do so?
Most new culture and website content is under full copyright. How much of an obstacle was that for these companies?
Bold move. Who uses Copilot these days? Unless they have free credit I mean.
Finally. The option for me to enable Copilot data sharing has been locked as disabled for some time, so until now I couldn't even enable it if I wanted to.
Two issues with this:
1. Vulnerabilities and secrets can be leaked to other users.
2. Intellectual property can also be leaked to other users.
Most smart clients won't opt-out, they will just cut usage entirely.
That's me. Frankly, I'm looking at just uninstalling VSCode, because Copilot straight-up gets in the way of so much, and they've stopped even bothering with features not related to it (with one exception, the native browser in v112, which, admittedly, is great).
VSCode can be cleaned: https://github.com/VSCodium/vscodium
(I prefer Emacs anyway, but VSCode is a worthy tool.)
making this option opt-in by default is a very shady choice, GitHub.
Checked and mine was already on disabled. Don't remember if I previously toggled it or not..
If you previously opted out of the setting allowing GitHub to collect data for product improvements, your preference has been retained here. We figured if you didn't want that then you definitely wouldn't want this..
> Content from your issues, discussions, or private repositories at rest. We use the phrase “at rest” deliberately because Copilot does process code from private repositories when you are actively using Copilot. This interaction data is required to run the service and could be used for model training unless you opt out.
Sounds like it's even likely to train on content from private repositories. This feels like a bit of an overstep to me.
We all knew Microsoft was going to destroy GitHub eventually when it was first bought.
How much longer do you want to tolerate the enshittification? How much longer CAN you tolerate it?
Is it legal ? Surely not in any EU countries.
Does it even matter? They trained AI on obviously copyrighted and even pirated content. If this change is legally significant and a legal breach, the existence of all models and all AI businesses also is illegal.
It might or might not be legal, but it seems materially worse to screw over your direct customers than to violate the social-contracty nature of copyright law. But hey ho if you're not paying then you're the product, as ever was.
At least one instance where it was enabled in EU countries as well.
If this doesn't sound bad enough, it's possible that Copilot is already enabled. As we know, these kinds of features are pushed to users instead of being asked for.
Maybe it's already active in our accounts without us realizing it, so our code will be used to train the AI.
Now we can't be sure if this will happen or not, but a company like GitHub should be staying miles away from this kind of policy. I personally wouldn't use GitHub for private corporate repositories. Only as a public web interface for public repos.
So I do all the work of thinking about how to do something, and as soon as I tell Copilot about it, now it's in the training data and anyone can ask the LLM and it'll tell them the solution I came up with? Great. I'm going to cancel.
ill be moving off github now
> From April 24 onward, interaction data—specifically inputs, outputs, code snippets, and associated context—from Copilot Free, Pro, and Pro+ users will be used to train and improve our AI models unless they opt out.
Now is the time to run off of GitHub and consider Codeberg or self hosting like I said before. [0]
Codeberg doesn't support non-OSS work, and I'd rather have just one 'git' thing to know for both OSS and private work. So it's not a great option, IMO. Self-hosting is out too, for other reasons.
I'm not sure there are any good GitHub alternatives. I don't trust Gitlab either. Their landing page title currently starts with "Finally, AI". Eek.
Maybe sourcehut? https://sourcehut.org
It's an option but I can't really take the platform seriously when the owner removes content based on his personal whims. He currently removes crypto projects because of their 'social ills'. I don't work on crypto, but he might start deleting AI projects for the same reason, say.
(oops)
It’s currently March
Oops. Thank you for correcting me!
As it's enabled by default, does that mean everything has already been siphoned off and now I'm just closing the gate behind the animals escaping?
Shit like this shouldn't be allowed.
Why wouldn't people want to make the models better? Aren't we all getting the benefit, after all?
That's akin to being grateful for your local shop owner that they allowed you to sweep the floor for other customers.
Please don’t strawman me, I asked completely different question.
It’s not about being grateful or something. It’s that many people (devs) are too concerned about their code being stolen, as if they’ve come up with something unique, and as if LLMs were some kind of database (which they aren’t).
At the end of the day we’re all going to be using AI to write code; many of us are already doing that. And if some GitHub Copilot model gets better, we get higher-quality code that is generally available for the next pretraining runs (for your models and others'). Some would even switch to Copilot if it’s good.
What do you think about it?
People would have a different response if they did not, in my view accurately, perceive that wool is being pulled over their eyes.
If something is mine by right, no matter how little or how much it's worth, no one should be allowed to force or trick me into donating it. It should just be my choice.